[RT PATCH] cputime: remove raw locks introduced by RT patchset


[RT PATCH] cputime: remove raw locks introduced by RT patchset

Clark Williams
Sebastian,

This patch removes the raw spinlock operations used when updating
cputime in the vtime_* functions in kernel/sched/cputime.c.

Based on Frederic's commit b7ce2277f087fd052, the raw spinlocks in
the vtime_* functions are no longer needed to guard against writer
concurrency: the RT versions of write_seqcount_begin() and
write_seqcount_end() call preempt_disable_rt() and
preempt_enable_rt(), so we are already in atomic context while
updating cputime.
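
For reference, the RT write side then behaves roughly like this
(simplified sketch of the behavior described above, not the verbatim
kernel source; on PREEMPT_RT the preempt_*_rt() helpers compile to
preempt_disable()/preempt_enable(), elsewhere they are no-ops):

static inline void write_seqcount_begin(seqcount_t *s)
{
        preempt_disable_rt();   /* writer stays atomic on RT */
        s->sequence++;          /* odd count: readers will retry */
        smp_wmb();              /* increment visible before data writes */
}

static inline void write_seqcount_end(seqcount_t *s)
{
        smp_wmb();              /* data writes visible before increment */
        s->sequence++;          /* even count again: snapshot consistent */
        preempt_enable_rt();    /* writer may be preempted again */
}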

I've run this patch on x86_64 4.4.6-rt14 and the RHEL-RT kernel,
with 12h rteval runs on both untuned systems and systems tuned
with isolcpus/rcu_nocbs/nohz_full. No ill effects seen.

Signed-off-by: Rik van Riel <[hidden email]>
Signed-off-by: Clark Williams <[hidden email]>
---
 kernel/sched/cputime.c | 18 ------------------
 1 file changed, 18 deletions(-)

diff --git a/kernel/sched/cputime.c b/kernel/sched/cputime.c
index 0f75a38cff96..9a823ced7e4a 100644
--- a/kernel/sched/cputime.c
+++ b/kernel/sched/cputime.c
@@ -696,45 +696,37 @@ static void __vtime_account_system(struct task_struct *tsk)
 
 void vtime_account_system(struct task_struct *tsk)
 {
- raw_spin_lock(&tsk->vtime_lock);
  write_seqcount_begin(&tsk->vtime_seq);
  __vtime_account_system(tsk);
  write_seqcount_end(&tsk->vtime_seq);
- raw_spin_unlock(&tsk->vtime_lock);
 }
 
 void vtime_gen_account_irq_exit(struct task_struct *tsk)
 {
- raw_spin_lock(&tsk->vtime_lock);
  write_seqcount_begin(&tsk->vtime_seq);
  __vtime_account_system(tsk);
  if (context_tracking_in_user())
  tsk->vtime_snap_whence = VTIME_USER;
  write_seqcount_end(&tsk->vtime_seq);
- raw_spin_unlock(&tsk->vtime_lock);
 }
 
 void vtime_account_user(struct task_struct *tsk)
 {
  cputime_t delta_cpu;
 
- raw_spin_lock(&tsk->vtime_lock);
  write_seqcount_begin(&tsk->vtime_seq);
  delta_cpu = get_vtime_delta(tsk);
  tsk->vtime_snap_whence = VTIME_SYS;
  account_user_time(tsk, delta_cpu, cputime_to_scaled(delta_cpu));
  write_seqcount_end(&tsk->vtime_seq);
- raw_spin_unlock(&tsk->vtime_lock);
 }
 
 void vtime_user_enter(struct task_struct *tsk)
 {
- raw_spin_lock(&tsk->vtime_lock);
  write_seqcount_begin(&tsk->vtime_seq);
  __vtime_account_system(tsk);
  tsk->vtime_snap_whence = VTIME_USER;
  write_seqcount_end(&tsk->vtime_seq);
- raw_spin_unlock(&tsk->vtime_lock);
 }
 
 void vtime_guest_enter(struct task_struct *tsk)
@@ -746,23 +738,19 @@ void vtime_guest_enter(struct task_struct *tsk)
  * synchronization against the reader (task_gtime())
  * that can thus safely catch up with a tickless delta.
  */
- raw_spin_lock(&tsk->vtime_lock);
  write_seqcount_begin(&tsk->vtime_seq);
  __vtime_account_system(tsk);
  current->flags |= PF_VCPU;
  write_seqcount_end(&tsk->vtime_seq);
- raw_spin_unlock(&tsk->vtime_lock);
 }
 EXPORT_SYMBOL_GPL(vtime_guest_enter);
 
 void vtime_guest_exit(struct task_struct *tsk)
 {
- raw_spin_lock(&tsk->vtime_lock);
  write_seqcount_begin(&tsk->vtime_seq);
  __vtime_account_system(tsk);
  current->flags &= ~PF_VCPU;
  write_seqcount_end(&tsk->vtime_seq);
- raw_spin_unlock(&tsk->vtime_lock);
 }
 EXPORT_SYMBOL_GPL(vtime_guest_exit);
 
@@ -775,30 +763,24 @@ void vtime_account_idle(struct task_struct *tsk)
 
 void arch_vtime_task_switch(struct task_struct *prev)
 {
- raw_spin_lock(&prev->vtime_lock);
  write_seqcount_begin(&prev->vtime_seq);
  prev->vtime_snap_whence = VTIME_SLEEPING;
  write_seqcount_end(&prev->vtime_seq);
- raw_spin_unlock(&prev->vtime_lock);
 
- raw_spin_lock(&current->vtime_lock);
  write_seqcount_begin(&current->vtime_seq);
  current->vtime_snap_whence = VTIME_SYS;
  current->vtime_snap = sched_clock_cpu(smp_processor_id());
  write_seqcount_end(&current->vtime_seq);
- raw_spin_unlock(&current->vtime_lock);
 }
 
 void vtime_init_idle(struct task_struct *t, int cpu)
 {
  unsigned long flags;
 
- raw_spin_lock_irqsave(&t->vtime_lock, flags);
  write_seqcount_begin(&t->vtime_seq);
  t->vtime_snap_whence = VTIME_SYS;
  t->vtime_snap = sched_clock_cpu(cpu);
  write_seqcount_end(&t->vtime_seq);
- raw_spin_unlock_irqrestore(&t->vtime_lock, flags);
 }
 
 cputime_t task_gtime(struct task_struct *t)
--
2.5.5


Re: [RT PATCH] cputime: remove raw locks introduced by RT patchset

Sebastian Andrzej Siewior
On 04/15/2016 05:09 AM, Clark Williams wrote:
> Sebastian,

Hi Clark,

> This patch removes the raw spinlock operations used when updating
> cputime in the vtime_* functions in kernel/sched/cputime.c.
>
> Based on Frederic's commit b7ce2277f087fd052, the raw spinlocks in
> the vtime_* functions are no longer needed to guard against writer
> concurrency: the RT versions of write_seqcount_begin() and
> write_seqcount_end() call preempt_disable_rt() and
> preempt_enable_rt(), so we are already in atomic context while
> updating cputime.
>
> I've run this patch on x86_64 4.4.6-rt14 and the RHEL-RT kernel,
> with 12h rteval runs on both untuned systems and systems tuned
> with isolcpus/rcu_nocbs/nohz_full. No ill effects seen.

Frederic's patch made it into 4.5-rc1. What I did yesterday was:

* sched/cputime: Convert vtime_seqlock to seqcount
* sched/cputime: Clarify vtime symbols and document them
* Revert "vtime: Split lock and seqcount"

which is what you want, correct?
>
> Signed-off-by: Rik van Riel <[hidden email]>
> Signed-off-by: Clark Williams <[hidden email]>

Sebastian

Re: [RT PATCH] cputime: remove raw locks introduced by RT patchset

Clark Williams
On Fri, 15 Apr 2016 09:13:34 +0200
Sebastian Andrzej Siewior <[hidden email]> wrote:

> Frederic's patch made it into 4.5-rc1. What I did yesterday was:
>
> * sched/cputime: Convert vtime_seqlock to seqcount
> * sched/cputime: Clarify vtime symbols and document them
> * Revert "vtime: Split lock and seqcount"
>
> which is what you want, correct?
> >
> > Signed-off-by: Rik van Riel <[hidden email]>
> > Signed-off-by: Clark Williams <[hidden email]>  
>
> Sebastian
>
Yes, that's it. I just wanted to get rid of the redundant locking
operations and reduce the accounting overhead a bit.
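
For reference, the reader side (e.g. task_gtime()) only needs the
seqcount retry loop to see a consistent snapshot, which is why the
extra lock was redundant. Roughly (simplified sketch using the 4.4-rt
field names, omitting the tickless-delta handling):

cputime_t task_gtime(struct task_struct *t)
{
        unsigned int seq;
        cputime_t gtime;

        do {
                seq = read_seqcount_begin(&t->vtime_seq);
                gtime = t->gtime;       /* re-read if a writer raced us */
        } while (read_seqcount_retry(&t->vtime_seq, seq));

        return gtime;
}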

Thanks!
Clark
