Commit 058ebd0

Peter Zijlstra authored and Ingo Molnar committed
perf: Fix perf_lock_task_context() vs RCU
Jiri managed to trigger this warning:

 [] ======================================================
 [] [ INFO: possible circular locking dependency detected ]
 [] 3.10.0+ #228 Tainted: G W
 [] -------------------------------------------------------
 [] p/6613 is trying to acquire lock:
 []  (rcu_node_0){..-...}, at: [<ffffffff810ca797>] rcu_read_unlock_special+0xa7/0x250
 []
 [] but task is already holding lock:
 []  (&ctx->lock){-.-...}, at: [<ffffffff810f2879>] perf_lock_task_context+0xd9/0x2c0
 []
 [] which lock already depends on the new lock.
 []
 [] the existing dependency chain (in reverse order) is:
 []
 [] -> #4 (&ctx->lock){-.-...}:
 [] -> #3 (&rq->lock){-.-.-.}:
 [] -> #2 (&p->pi_lock){-.-.-.}:
 [] -> #1 (&rnp->nocb_gp_wq[1]){......}:
 [] -> #0 (rcu_node_0){..-...}:

Paul was quick to explain that due to preemptible RCU we cannot call
rcu_read_unlock() while holding scheduler (or nested) locks when part
of the read side critical section was preemptible. Therefore solve it
by making the entire RCU read side non-preemptible.

Also pull out the retry from under the non-preempt to play nice with RT.

Reported-by: Jiri Olsa <[email protected]>
Helped-out-by: Paul E. McKenney <[email protected]>
Cc: <[email protected]>
Signed-off-by: Peter Zijlstra <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
1 parent 06f4179 commit 058ebd0

1 file changed: +14 −1

kernel/events/core.c

Lines changed: 14 additions & 1 deletion
@@ -947,8 +947,18 @@ perf_lock_task_context(struct task_struct *task, int ctxn, unsigned long *flags)
 {
 	struct perf_event_context *ctx;
 
-	rcu_read_lock();
 retry:
+	/*
+	 * One of the few rules of preemptible RCU is that one cannot do
+	 * rcu_read_unlock() while holding a scheduler (or nested) lock when
+	 * part of the read side critical section was preemptible -- see
+	 * rcu_read_unlock_special().
+	 *
+	 * Since ctx->lock nests under rq->lock we must ensure the entire read
+	 * side critical section is non-preemptible.
+	 */
+	preempt_disable();
+	rcu_read_lock();
 	ctx = rcu_dereference(task->perf_event_ctxp[ctxn]);
 	if (ctx) {
 		/*
@@ -964,6 +974,8 @@ perf_lock_task_context(struct task_struct *task, int ctxn, unsigned long *flags)
 		raw_spin_lock_irqsave(&ctx->lock, *flags);
 		if (ctx != rcu_dereference(task->perf_event_ctxp[ctxn])) {
 			raw_spin_unlock_irqrestore(&ctx->lock, *flags);
+			rcu_read_unlock();
+			preempt_enable();
 			goto retry;
 		}
 
@@ -973,6 +985,7 @@ perf_lock_task_context(struct task_struct *task, int ctxn, unsigned long *flags)
 		}
 	}
 	rcu_read_unlock();
+	preempt_enable();
 	return ctx;
 }