
[PATCH v12 00/10] arm64: Add kernel probes (kprobes) support


Re: [PATCH v12 01/10] arm64: Add HAVE_REGS_AND_STACK_ACCESS_API feature

Huang Shijie-4
On Wed, Apr 27, 2016 at 02:52:56PM -0400, David Long wrote:

> From: "David A. Long" <[hidden email]>
> +/**
> + * regs_within_kernel_stack() - check the address in the stack
> + * @regs:      pt_regs which contains kernel stack pointer.
> + * @addr:      address which is checked.
> + *
> + * regs_within_kernel_stack() checks @addr is within the kernel stack page(s).
> + * If @addr is within the kernel stack, it returns true. If not, returns false.
> + */
> +bool regs_within_kernel_stack(struct pt_regs *regs, unsigned long addr)
> +{
> + return ((addr & ~(THREAD_SIZE - 1))  ==
> + (kernel_stack_pointer(regs) & ~(THREAD_SIZE - 1))) ||
> + on_irq_stack(addr, raw_smp_processor_id());
> +}
> +
> +/**
> + * regs_get_kernel_stack_nth() - get Nth entry of the stack
> + * @regs: pt_regs which contains kernel stack pointer.
> + * @n: stack entry number.
> + *
> + * regs_get_kernel_stack_nth() returns @n th entry of the kernel stack which
> + * is specified by @regs. If the @n th entry is NOT in the kernel stack,
> + * this returns 0.
> + */
> +unsigned long regs_get_kernel_stack_nth(struct pt_regs *regs, unsigned int n)
> +{
> + unsigned long *addr = (unsigned long *)kernel_stack_pointer(regs);
> +
> + addr += n;
> + if (regs_within_kernel_stack(regs, (unsigned long)addr))
If @addr falls in the interrupt stack, regs_within_kernel_stack()
will return true. But is that what we want?

thanks
Huang Shijie


Re: [PATCH v12 10/10] kprobes: Add arm64 case in kprobe example module

Huang Shijie-4
In reply to this post by David Long
On Wed, Apr 27, 2016 at 02:53:05PM -0400, David Long wrote:

> From: Sandeepa Prabhu <[hidden email]>
>
> Add info prints in sample kprobe handlers for ARM64
>
> Signed-off-by: Sandeepa Prabhu <[hidden email]>
> ---
>  samples/kprobes/kprobe_example.c | 8 ++++++++
>  1 file changed, 8 insertions(+)
>
> diff --git a/samples/kprobes/kprobe_example.c b/samples/kprobes/kprobe_example.c
> index 727eb21..0c72b8a 100644
> --- a/samples/kprobes/kprobe_example.c
> +++ b/samples/kprobes/kprobe_example.c
> @@ -42,6 +42,10 @@ static int handler_pre(struct kprobe *p, struct pt_regs *regs)
>   " ex1 = 0x%lx\n",
>   p->addr, regs->pc, regs->ex1);
>  #endif
> +#ifdef CONFIG_ARM64
> + pr_info("pre_handler: p->addr = 0x%p, pc = 0x%lx\n",

I think you missed the KERN_INFO here.

> + p->addr, (long)regs->pc);
> +#endif
>  
>   /* A dump_stack() here will give a stack backtrace */
>   return 0;
> @@ -67,6 +71,10 @@ static void handler_post(struct kprobe *p, struct pt_regs *regs,
>   printk(KERN_INFO "post_handler: p->addr = 0x%p, ex1 = 0x%lx\n",
>   p->addr, regs->ex1);
>  #endif
> +#ifdef CONFIG_ARM64
> + pr_info("post_handler: p->addr = 0x%p, pc = 0x%lx\n",
ditto.
> + p->addr, (long)regs->pc);
> +#endif
thanks
Huang Shijie


Re: [PATCH v12 10/10] kprobes: Add arm64 case in kprobe example module

Mark Brown-2
On Tue, May 17, 2016 at 05:57:33PM +0800, Huang Shijie wrote:
> On Wed, Apr 27, 2016 at 02:53:05PM -0400, David Long wrote:

> > +#ifdef CONFIG_ARM64
> > + pr_info("pre_handler: p->addr = 0x%p, pc = 0x%lx\n",

> I think you missed the KERN_INFO here.

That's what pr_info() does over printk() - it adds the KERN_INFO more
cleanly.


Re: [PATCH v12 10/10] kprobes: Add arm64 case in kprobe example module

Huang Shijie-4
On Tue, May 17, 2016 at 11:24:27AM +0100, Mark Brown wrote:

> On Tue, May 17, 2016 at 05:57:33PM +0800, Huang Shijie wrote:
> > On Wed, Apr 27, 2016 at 02:53:05PM -0400, David Long wrote:
>
> > > +#ifdef CONFIG_ARM64
> > > + pr_info("pre_handler: p->addr = 0x%p, pc = 0x%lx\n",
>
> > I think you missed the KERN_INFO here.
>
> That's what pr_info() does over printk() - it adds the KERN_INFO more
> cleanly.
Sorry, I mistook "pr_info" for "printk" when I first read this code.

thanks
Huang Shijie


Re: [PATCH v12 00/10] arm64: Add kernel probes (kprobes) support

Huang Shijie-4
In reply to this post by Li Bin
On Thu, May 12, 2016 at 10:26:40AM +0800, Li Bin wrote:

>
>
> on 2016/5/11 23:33, James Morse wrote:
> > Hi David,
> >
> > On 27/04/16 19:52, David Long wrote:
> >> From: "David A. Long" <[hidden email]>
> >>
> >> This patchset is heavily based on Sandeepa Prabhu's ARM v8 kprobes patches,
> >> first seen in October 2013. This version attempts to address concerns raised by
> >> reviewers and also fixes problems discovered during testing.
> >>
> >> This patchset adds support for kernel probes(kprobes), jump probes(jprobes)
> >> and return probes(kretprobes) support for ARM64.
> >>
> >> The kprobes mechanism makes use of software breakpoint and single stepping
> >> support available in the ARM v8 kernel.
> >
> > I applied this series on v4.6-rc7, and built the sample kprobes. They work fine,
> > unless I throw ftrace into the mix too.
> >
> > I enabled the function_graph tracer, then tried to load the jprobe example module:
> > -------------------------%<-------------------------
> > root@ubuntu:/sys/kernel/debug/tracing# insmod /root/jprobe_example.ko
> > Planted jprobe at ffffff80080c8f20, handler addr ffffff8000bb3000
> > root@ubuntu:/sys/kernel/debug/tracing# jprobe: clone_flags = 0x1200011, stack_st
> > art = 0x0 stack_size = 0x0
> > Bad mode in Synchronous Abort handler detected, code 0x86000005 -- IABT (current
> >  EL)
> > CPU: 5 PID: 1047 Comm: systemd-udevd Not tainted 4.6.0-rc7+ #4064
> > Hardware name: ARM Juno development board (r1) (DT)
> > task: ffffffc975948300 ti: ffffffc974e4c000 task.ti: ffffffc974e4c000
> > PC is at 0x0
> > LR is at 0x0
> >
> > pc : [<0000000000000000>] lr : [<0000000000000000>] pstate: 60000145
> > sp : ffffffc974e4ff00
> > x29: 0000000001200011 x28: ffffffc974e4c000
> > x27: ffffff80088d0000 x26: 00000000000000dc
> > x25: 0000000000000120 x24: 0000000000000015
> > x23: 0000000060000000 x22: 0000007fa1b40e60
> > x21: 0000007fa1ce70d0 x20: 0000000000000000
> > x19: 0000000000000000 x18: 0000000000000a03
> > x17: 0000007fa1b40d90 x16: ffffff80080c9708
> > x15: 003b9aca00000000 x14: 0000007fddb7e5c0
> > x13: 0000007fa1b40e2c x12: 0000000000d00ff0
> > x11: ffffff8009c4d000 x10: ffffff800920c000
> > x9 : ffffff8008f5c000 x8 : ffffffc976c06800
> > x7 : 000000000006daf2 x6 : 0000000000000015
> > x5 : 0000000000000004 x4 : ffffffc96e8690a0
> > x3 : 0000001ed7cbab74 x2 : ffffffc96e869000
> > x1 : 0000000000000000 x0 : 0000000000000000
> >
> > Internal error: Oops - bad mode: 0 [#1] PREEMPT SMP
> > Modules linked in: jprobe_example
> > CPU: 5 PID: 1047 Comm: systemd-udevd Not tainted 4.6.0-rc7+ #4064
> > Hardware name: ARM Juno development board (r1) (DT)
> > task: ffffffc975948300 ti: ffffffc974e4c000 task.ti: ffffffc974e4c000
> > PC is at 0x0
> > LR is at 0x0
> >
> > pc : [<0000000000000000>] lr : [<0000000000000000>] pstate: 60000145
> > sp : ffffffc974e4ff00
> > x29: 0000000001200011 x28: ffffffc974e4c000
> > x27: ffffff80088d0000 x26: 00000000000000dc
> > x25: 0000000000000120 x24: 0000000000000015
> > x23: 0000000060000000 x22: 0000007fa1b40e60
> > x21: 0000007fa1ce70d0 x20: 0000000000000000
> > x19: 0000000000000000 x18: 0000000000000a03
> > x17: 0000007fa1b40d90 x16: ffffff80080c9708
> > x15: 003b9aca00000000 x14: 0000007fddb7e5c0
> > x13: 0000007fa1b40e2c x12: 0000000000d00ff0
> > x11: ffffff8009c4d000 x10: ffffff800920c000
> > x9 : ffffff8008f5c000 x8 : ffffffc976c06800
> > x7 : 000000000006daf2 x6 : 0000000000000015
> > x5 : 0000000000000004 x4 : ffffffc96e8690a0
> > x3 : 0000001ed7cbab74 x2 : ffffffc96e869000
> > x1 : 0000000000000000 x0 : 0000000000000000
> >
> > Process systemd-udevd (pid: 1047, stack limit = 0xffffffc974e4c020)
> > Stack: (0xffffffc974e4ff00 to 0xffffffc974e50000)
> > ff00: 0000000000000417 0000007fa1ce76f0 00000000000000dc 0000000000000417
> > ff20: 00000000ffffffff 0000007fddb7ecf8 0000000000000005 ffffffffffffffff
> > ff40: 00000000ff000001 003b9aca00000000 000000555b3868b0 0000007fa1b40d90
> > ff60: 0000000000000a03 0000007fddb7e5c0 0000000000000000 0000007fddb7e5e0
> > ff80: 000000555b358000 000000558f56f0e0 0000000000000000 000000558f574f00
> > ffa0: 000000558f574f00 00000000000004fa 000000558f56f010 0000007fddb7e600
> > ffc0: 0000007fa1b40e2c 0000007fddb7e5c0 0000007fa1b40e60 0000000060000000
> > ffe0: 0000000001200011 00000000000000dc 0004000084000200 0800000002000000
> > Call trace:
> > [<          (null)>]           (null)
> > Code: bad PC value
> > ---[ end trace 35d24aad799c2941 ]---
> > -------------------------%<-------------------------
> >
>
> To solve this, it should pause function tracing before the jprobe handler is called
> and unpause it before it returns back to the function it probed.
>
> diff --git a/arch/arm64/kernel/kprobes.c b/arch/arm64/kernel/kprobes.c
> index db2d95c..b21ed00 100644
> --- a/arch/arm64/kernel/kprobes.c
> +++ b/arch/arm64/kernel/kprobes.c
> @@ -714,6 +714,7 @@ int __kprobes setjmp_pre_handler(struct kprobe *p, struct pt_regs *regs)
>
>         instruction_pointer_set(regs, (long)jp->entry);
>         preempt_disable();
> +       pause_graph_tracing();
>         return 1;
>  }
>
> @@ -757,6 +758,7 @@ int __kprobes longjmp_break_handler(struct kprobe *p, struct pt_regs *regs)
>                         show_regs(regs);
>                         BUG();
>                 }
> +               unpause_graph_tracing();
>                 *regs = kcb->jprobe_saved_regs;
>                 memcpy((void *)stack_addr, kcb->jprobes_stack,
>                        MIN_STACK_SIZE(stack_addr));
>
>
I tested this fix; it works.

thanks
Huang Shijie


Re: [PATCH v12 05/10] arm64: Kprobes with single stepping support

Masami Hiramatsu-2
In reply to this post by Huang Shijie-4
On Tue, 17 May 2016 16:58:09 +0800
Huang Shijie <[hidden email]> wrote:

> On Wed, Apr 27, 2016 at 02:53:00PM -0400, David Long wrote:
> > +
> > +/*
> > + * Interrupts need to be disabled before single-step mode is set, and not
> > + * reenabled until after single-step mode ends.
> > + * Without disabling interrupt on local CPU, there is a chance of
> > + * interrupt occurrence in the period of exception return and  start of
> > + * out-of-line single-step, that result in wrongly single stepping
> > + * into the interrupt handler.
> > + */
> > +static void __kprobes kprobes_save_local_irqflag(struct pt_regs *regs)
> > +{
> > +     struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
>
> Why not add a parameter for this function to save the @kcb?

Good catch; it should use the same kcb as the caller.

>
> > +
> > +     kcb->saved_irqflag = regs->pstate;
> > +     regs->pstate |= PSR_I_BIT;
> > +}
> > +
> > +static void __kprobes kprobes_restore_local_irqflag(struct pt_regs *regs)
> > +{
> > +     struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
> ditto.
>
> > +
> > +     if (kcb->saved_irqflag & PSR_I_BIT)
> > +             regs->pstate |= PSR_I_BIT;
> > +     else
> > +             regs->pstate &= ~PSR_I_BIT;
> > +}
> > +
> > +static void __kprobes
> > +set_ss_context(struct kprobe_ctlblk *kcb, unsigned long addr)
> > +{
> > +     kcb->ss_ctx.ss_pending = true;
> > +     kcb->ss_ctx.match_addr = addr + sizeof(kprobe_opcode_t);
> > +}
> > +
> > +static void __kprobes clear_ss_context(struct kprobe_ctlblk *kcb)
> > +{
> > +     kcb->ss_ctx.ss_pending = false;
> > +     kcb->ss_ctx.match_addr = 0;
> > +}
> > +
> > +static void __kprobes setup_singlestep(struct kprobe *p,
> > +                                    struct pt_regs *regs,
> > +                                    struct kprobe_ctlblk *kcb, int reenter)
> > +{
> > +     unsigned long slot;
> > +
> > +     if (reenter) {
> > +             save_previous_kprobe(kcb);
> > +             set_current_kprobe(p);
> > +             kcb->kprobe_status = KPROBE_REENTER;
> > +     } else {
> > +             kcb->kprobe_status = KPROBE_HIT_SS;
> > +     }
> > +
> > +     if (p->ainsn.insn) {
> > +             /* prepare for single stepping */
> > +             slot = (unsigned long)p->ainsn.insn;
> > +
> > +             set_ss_context(kcb, slot);      /* mark pending ss */
> > +
> > +             if (kcb->kprobe_status == KPROBE_REENTER)
> > +                     spsr_set_debug_flag(regs, 0);
> > +
> > +             /* IRQs and single stepping do not mix well. */
> > +             kprobes_save_local_irqflag(regs);
> > +             kernel_enable_single_step(regs);
> > +             instruction_pointer(regs) = slot;
> > +     } else  {
> > +             BUG();

You'd better use BUG_ON(!p->ainsn.insn);

> > +     }
> > +}
> > +
> > +static int __kprobes reenter_kprobe(struct kprobe *p,
> > +                                 struct pt_regs *regs,
> > +                                 struct kprobe_ctlblk *kcb)
> > +{
> > +     switch (kcb->kprobe_status) {
> > +     case KPROBE_HIT_SSDONE:
> > +     case KPROBE_HIT_ACTIVE:
> > +             kprobes_inc_nmissed_count(p);
> > +             setup_singlestep(p, regs, kcb, 1);
> > +             break;
> > +     case KPROBE_HIT_SS:
> > +     case KPROBE_REENTER:
> > +             pr_warn("Unrecoverable kprobe detected at %p.\n", p->addr);
> > +             dump_kprobe(p);
> > +             BUG();
> > +             break;
> > +     default:
> > +             WARN_ON(1);
> > +             return 0;
> > +     }
> > +
> > +     return 1;
> > +}
> > +
> > +static void __kprobes
> > +post_kprobe_handler(struct kprobe_ctlblk *kcb, struct pt_regs *regs)
> > +{
> > +     struct kprobe *cur = kprobe_running();
> > +
> > +     if (!cur)
> > +             return;
> > +
> > +     /* return addr restore if non-branching insn */
> > +     if (cur->ainsn.restore.type == RESTORE_PC) {
> > +             instruction_pointer(regs) = cur->ainsn.restore.addr;
> > +             if (!instruction_pointer(regs))
> > +                     BUG();
> > +     }
> > +
> > +     /* restore back original saved kprobe variables and continue */
> > +     if (kcb->kprobe_status == KPROBE_REENTER) {
> > +             restore_previous_kprobe(kcb);
> > +             return;
> > +     }
> > +     /* call post handler */
> > +     kcb->kprobe_status = KPROBE_HIT_SSDONE;
> > +     if (cur->post_handler)  {
> > +             /* post_handler can hit breakpoint and single step
> > +              * again, so we enable D-flag for recursive exception.
> > +              */
> > +             cur->post_handler(cur, regs, 0);
> > +     }
> > +
> > +     reset_current_kprobe();
> > +}
> > +
> > +int __kprobes kprobe_fault_handler(struct pt_regs *regs, unsigned int fsr)
> > +{
> > +     struct kprobe *cur = kprobe_running();
> > +     struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
> > +
> > +     switch (kcb->kprobe_status) {
> > +     case KPROBE_HIT_SS:
> > +     case KPROBE_REENTER:
> > +             /*
> > +              * We are here because the instruction being single
> > +              * stepped caused a page fault. We reset the current
> > +              * kprobe and the ip points back to the probe address
> > +              * and allow the page fault handler to continue as a
> > +              * normal page fault.
> > +              */
> > +             instruction_pointer(regs) = (unsigned long)cur->addr;
> > +             if (!instruction_pointer(regs))
> > +                     BUG();

This can be BUG_ON(!instruction_pointer(regs)).

> > +             if (kcb->kprobe_status == KPROBE_REENTER)
> > +                     restore_previous_kprobe(kcb);
> > +             else
> > +                     reset_current_kprobe();
> > +
> > +             break;
> > +     case KPROBE_HIT_ACTIVE:
> > +     case KPROBE_HIT_SSDONE:
> > +             /*
> > +              * We increment the nmissed count for accounting,
> > +              * we can also use npre/npostfault count for accounting
> > +              * these specific fault cases.
> > +              */
> > +             kprobes_inc_nmissed_count(cur);
> > +
> > +             /*
> > +              * We come here because instructions in the pre/post
> > +              * handler caused the page_fault, this could happen
> > +              * if handler tries to access user space by
> > +              * copy_from_user(), get_user() etc. Let the
> > +              * user-specified handler try to fix it first.
> > +              */
> > +             if (cur->fault_handler && cur->fault_handler(cur, regs, fsr))
> > +                     return 1;
> > +
> > +             /*
> > +              * In case the user-specified fault handler returned
> > +              * zero, try to fix up.
> > +              */
> > +             if (fixup_exception(regs))
> > +                     return 1;
> > +     }
> > +     return 0;
> > +}
> > +
> > +int __kprobes kprobe_exceptions_notify(struct notifier_block *self,
> > +                                    unsigned long val, void *data)
> > +{
> > +     return NOTIFY_DONE;
> > +}
> > +
> > +static void __kprobes kprobe_handler(struct pt_regs *regs)
> > +{
> > +     struct kprobe *p, *cur_kprobe;
> > +     struct kprobe_ctlblk *kcb;
> > +     unsigned long addr = instruction_pointer(regs);
> > +
> > +     kcb = get_kprobe_ctlblk();
> > +     cur_kprobe = kprobe_running();
> > +
> > +     p = get_kprobe((kprobe_opcode_t *) addr);
> > +
> > +     if (p) {
> > +             if (cur_kprobe) {
> > +                     if (reenter_kprobe(p, regs, kcb))
> > +                             return;
> > +             } else {
> > +                     /* Probe hit */
> > +                     set_current_kprobe(p);
> > +                     kcb->kprobe_status = KPROBE_HIT_ACTIVE;
> > +
> > +                     /*
> > +                      * If we have no pre-handler or it returned 0, we
> > +                      * continue with normal processing.  If we have a
> > +                      * pre-handler and it returned non-zero, it prepped
> > +                      * for calling the break_handler below on re-entry,
> > +                      * so get out doing nothing more here.
> > +                      *
> > +                      * pre_handler can hit a breakpoint and can step thru
> > +                      * before return, keep PSTATE D-flag enabled until
> > +                      * pre_handler return back.
> > +                      */
> > +                     if (!p->pre_handler || !p->pre_handler(p, regs)) {
> > +                             kcb->kprobe_status = KPROBE_HIT_SS;
> The above line is redundant.
> You will set KPROBE_HIT_SS in setup_singlestep().

Right.

>
> > +                             setup_singlestep(p, regs, kcb, 0);
> > +                             return;
> > +                     }
> > +             }
> > +     } else if ((le32_to_cpu(*(kprobe_opcode_t *) addr) ==
> > +         BRK64_OPCODE_KPROBES) && cur_kprobe) {
> > +             /* We probably hit a jprobe.  Call its break handler. */
> > +             if (cur_kprobe->break_handler  &&
> > +                  cur_kprobe->break_handler(cur_kprobe, regs)) {
> > +                     kcb->kprobe_status = KPROBE_HIT_SS;
> ditto
> > +                     setup_singlestep(cur_kprobe, regs, kcb, 0);
> > +                     return;
> > +             }
> > +     }
> > +     /*
> > +      * The breakpoint instruction was removed right
> > +      * after we hit it.  Another cpu has removed
> > +      * either a probepoint or a debugger breakpoint
> > +      * at this address.  In either case, no further
> > +      * handling of this interrupt is appropriate.
> > +      * Return back to original instruction, and continue.
> > +      */
> > +}

Thanks,



--
Masami Hiramatsu <[hidden email]>

Re: [PATCH v12 05/10] arm64: Kprobes with single stepping support

Masami Hiramatsu-2
In reply to this post by James Morse
On Thu, 12 May 2016 16:01:54 +0100
James Morse <[hidden email]> wrote:

> Hi David, Sandeepa,
>
> On 27/04/16 19:53, David Long wrote:
> > From: Sandeepa Prabhu <[hidden email]>
>
> > diff --git a/arch/arm64/kernel/kprobes.c b/arch/arm64/kernel/kprobes.c
> > new file mode 100644
> > index 0000000..dfa1b1f
> > --- /dev/null
> > +++ b/arch/arm64/kernel/kprobes.c
> > @@ -0,0 +1,520 @@
> > +/*
> > + * arch/arm64/kernel/kprobes.c
> > + *
> > + * Kprobes support for ARM64
> > + *
> > + * Copyright (C) 2013 Linaro Limited.
> > + * Author: Sandeepa Prabhu <[hidden email]>
> > + *
> > + * This program is free software; you can redistribute it and/or modify
> > + * it under the terms of the GNU General Public License version 2 as
> > + * published by the Free Software Foundation.
> > + *
> > + * This program is distributed in the hope that it will be useful,
> > + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> > + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> > + * General Public License for more details.
> > + *
> > + */
> > +#include <linux/kernel.h>
> > +#include <linux/kprobes.h>
> > +#include <linux/module.h>
> > +#include <linux/slab.h>
> > +#include <linux/stop_machine.h>
> > +#include <linux/stringify.h>
> > +#include <asm/traps.h>
> > +#include <asm/ptrace.h>
> > +#include <asm/cacheflush.h>
> > +#include <asm/debug-monitors.h>
> > +#include <asm/system_misc.h>
> > +#include <asm/insn.h>
> > +#include <asm/uaccess.h>
> > +
> > +#include "kprobes-arm64.h"
> > +
> > +#define MIN_STACK_SIZE(addr) min((unsigned long)MAX_STACK_SIZE, \
> > + (unsigned long)current_thread_info() + THREAD_START_SP - (addr))
>
> What if we probe something called on the irq stack?
> This needs the on_irq_stack() checks too, the start/end can be found from the
> per-cpu irq_stack value.
>
> [ ... ]
>
> > +int __kprobes setjmp_pre_handler(struct kprobe *p, struct pt_regs *regs)
> > +{
> > + struct jprobe *jp = container_of(p, struct jprobe, kp);
> > + struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
> > + long stack_ptr = kernel_stack_pointer(regs);
> > +
> > + kcb->jprobe_saved_regs = *regs;
> > + memcpy(kcb->jprobes_stack, (void *)stack_ptr,
> > +       MIN_STACK_SIZE(stack_ptr));
>
> I wonder if we need this stack save/restore?
>
> The comment next to the equivalent code for x86 says:
> > gcc assumes that the callee owns the argument space and could overwrite it,
> > e.g. tailcall optimization. So, to be absolutely safe we also save and
> > restore enough stack bytes to cover the argument area.
>
> On arm64 the first eight arguments are passed in registers, so we might not need
> this stack copy. (sparc and powerpc work like this too, their versions of this
> function don't copy chunks of the stack).

Hmm, maybe the sparc and powerpc implementations should also be fixed...

> ... then I went looking for functions with >8 arguments...
>
> Looking at the arm64 defconfig dwarf debug data, there are 71 of these that
> don't get inlined, picking at random:
> > rockchip_clk_register_pll() has 13
> > fib_dump_info() has 11
> > vma_merge() has 10
> > vring_create_virtqueue() has 10
> etc...
>
> So we do need this stack copying, so that we can probe these function without
> risking the arguments being modified.
>
> It may be worth including a comment to the effect that this stack save/restore
> is needed for functions that pass >8 arguments where the pre-handler may change
> these values on the stack.

Indeed, a comment on this code would help us understand the reason.

Thank you!

>
>
> > + preempt_enable_no_resched();
> > + return 1;
> > +}
> > +
>
>
> Thanks,
>
> James


--
Masami Hiramatsu <[hidden email]>

Re: [PATCH v12 07/10] arm64: kprobes instruction simulation support

Huang Shijie-4
In reply to this post by David Long
On Wed, Apr 27, 2016 at 02:53:02PM -0400, David Long wrote:

> From: Sandeepa Prabhu <[hidden email]>
>
> Kprobes needs simulation of instructions that cannot be stepped
> from a different memory location, e.g.: those instructions
> that uses PC-relative addressing. In simulation, the behaviour
> of the instruction is implemented using a copy of pt_regs.
>
> The following instruction categories are simulated:
>  - All branching instructions(conditional, register, and immediate)
>  - Literal access instructions(load-literal, adr/adrp)
>
> Conditional execution is limited to branching instructions in
> ARM v8. If conditions at PSTATE do not match the condition fields
> of opcode, the instruction is effectively NOP.
>
> Thanks to Will Cohen for assorted suggested changes.
>
> Signed-off-by: Sandeepa Prabhu <[hidden email]>
> Signed-off-by: William Cohen <[hidden email]>
> Signed-off-by: David A. Long <[hidden email]>
> ---
>  arch/arm64/include/asm/insn.h            |   1 +
>  arch/arm64/include/asm/probes.h          |   5 +-
>  arch/arm64/kernel/Makefile               |   3 +-
>  arch/arm64/kernel/insn.c                 |   1 +
>  arch/arm64/kernel/kprobes-arm64.c        |  29 ++++
>  arch/arm64/kernel/kprobes.c              |  32 ++++-
>  arch/arm64/kernel/probes-simulate-insn.c | 218 +++++++++++++++++++++++++++++++
>  arch/arm64/kernel/probes-simulate-insn.h |  28 ++++
>  8 files changed, 311 insertions(+), 6 deletions(-)
>  create mode 100644 arch/arm64/kernel/probes-simulate-insn.c
>  create mode 100644 arch/arm64/kernel/probes-simulate-insn.h
>
> diff --git a/arch/arm64/include/asm/insn.h b/arch/arm64/include/asm/insn.h
> index b9567a1..26cee10 100644
> --- a/arch/arm64/include/asm/insn.h
> +++ b/arch/arm64/include/asm/insn.h
> @@ -410,6 +410,7 @@ u32 aarch32_insn_mcr_extract_crm(u32 insn);
>  
>  typedef bool (pstate_check_t)(unsigned long);
>  extern pstate_check_t * const opcode_condition_checks[16];
> +
>  #endif /* __ASSEMBLY__ */
>  
>  #endif /* __ASM_INSN_H */
> diff --git a/arch/arm64/include/asm/probes.h b/arch/arm64/include/asm/probes.h
> index c5fcbe6..d524f7d 100644
> --- a/arch/arm64/include/asm/probes.h
> +++ b/arch/arm64/include/asm/probes.h
> @@ -15,11 +15,12 @@
>  #ifndef _ARM_PROBES_H
>  #define _ARM_PROBES_H
>  
> +#include <asm/opcodes.h>
> +
>  struct kprobe;
>  struct arch_specific_insn;
>  
>  typedef u32 kprobe_opcode_t;
> -typedef unsigned long (kprobes_pstate_check_t)(unsigned long);
>  typedef void (kprobes_handler_t) (u32 opcode, long addr, struct pt_regs *);
>  
>  enum pc_restore_type {
> @@ -35,7 +36,7 @@ struct kprobe_pc_restore {
>  /* architecture specific copy of original instruction */
>  struct arch_specific_insn {
>   kprobe_opcode_t *insn;
> - kprobes_pstate_check_t *pstate_cc;
> + pstate_check_t *pstate_cc;
>   kprobes_handler_t *handler;
>   /* restore address after step xol */
>   struct kprobe_pc_restore restore;
> diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
> index 8816de2..43bf6cc 100644
> --- a/arch/arm64/kernel/Makefile
> +++ b/arch/arm64/kernel/Makefile
> @@ -37,7 +37,8 @@ arm64-obj-$(CONFIG_CPU_PM) += sleep.o suspend.o
>  arm64-obj-$(CONFIG_CPU_IDLE) += cpuidle.o
>  arm64-obj-$(CONFIG_JUMP_LABEL) += jump_label.o
>  arm64-obj-$(CONFIG_KGDB) += kgdb.o
> -arm64-obj-$(CONFIG_KPROBES) += kprobes.o kprobes-arm64.o
> +arm64-obj-$(CONFIG_KPROBES) += kprobes.o kprobes-arm64.o \
> +   probes-simulate-insn.o
>  arm64-obj-$(CONFIG_EFI) += efi.o efi-entry.stub.o
>  arm64-obj-$(CONFIG_PCI) += pci.o
>  arm64-obj-$(CONFIG_ARMV8_DEPRECATED) += armv8_deprecated.o
> diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c
> index f79e72e..bb2738c 100644
> --- a/arch/arm64/kernel/insn.c
> +++ b/arch/arm64/kernel/insn.c
> @@ -30,6 +30,7 @@
>  #include <asm/cacheflush.h>
>  #include <asm/debug-monitors.h>
>  #include <asm/fixmap.h>
> +#include <asm/opcodes.h>
>  #include <asm/insn.h>
>  
>  #define AARCH64_INSN_SF_BIT BIT(31)
> diff --git a/arch/arm64/kernel/kprobes-arm64.c b/arch/arm64/kernel/kprobes-arm64.c
> index e07727a..487238a 100644
> --- a/arch/arm64/kernel/kprobes-arm64.c
> +++ b/arch/arm64/kernel/kprobes-arm64.c
> @@ -21,6 +21,7 @@
>  #include <asm/sections.h>
>  
>  #include "kprobes-arm64.h"
> +#include "probes-simulate-insn.h"
>  
>  static bool __kprobes aarch64_insn_is_steppable(u32 insn)
>  {
> @@ -62,8 +63,36 @@ arm_probe_decode_insn(kprobe_opcode_t insn, struct arch_specific_insn *asi)
>   */
>   if (aarch64_insn_is_steppable(insn))
>   return INSN_GOOD;
> +
> + if (aarch64_insn_is_bcond(insn)) {
> + asi->handler = simulate_b_cond;
> + } else if (aarch64_insn_is_cbz(insn) ||
> +    aarch64_insn_is_cbnz(insn)) {
> + asi->handler = simulate_cbz_cbnz;
> + } else if (aarch64_insn_is_tbz(insn) ||
> +    aarch64_insn_is_tbnz(insn)) {
> + asi->handler = simulate_tbz_tbnz;
> + } else if (aarch64_insn_is_adr_adrp(insn))
> + asi->handler = simulate_adr_adrp;
> + else if (aarch64_insn_is_b(insn) ||
> +    aarch64_insn_is_bl(insn))
> + asi->handler = simulate_b_bl;

For consistent coding style, we'd better add "{}" here and below.

thanks
Huang Shijie

> + else if (aarch64_insn_is_br(insn) ||
> +    aarch64_insn_is_blr(insn) ||
> +    aarch64_insn_is_ret(insn))
> + asi->handler = simulate_br_blr_ret;
> + else if (aarch64_insn_is_ldr_lit(insn))
> + asi->handler = simulate_ldr_literal;
> + else if (aarch64_insn_is_ldrsw_lit(insn))
> + asi->handler = simulate_ldrsw_literal;
>   else
> + /*
> + * Instruction cannot be stepped out-of-line and we don't
> + * (yet) simulate it.
> + */
>   return INSN_REJECTED;
> +
> + return INSN_GOOD_NO_SLOT;
>  }


Re: [PATCH v12 01/10] arm64: Add HAVE_REGS_AND_STACK_ACCESS_API feature

David Long
In reply to this post by Huang Shijie-4
On 05/17/2016 05:14 AM, Huang Shijie wrote:

> On Wed, Apr 27, 2016 at 02:52:56PM -0400, David Long wrote:
>> From: "David A. Long" <[hidden email]>
>> +/**
>> + * regs_within_kernel_stack() - check the address in the stack
>> + * @regs:      pt_regs which contains kernel stack pointer.
>> + * @addr:      address which is checked.
>> + *
>> + * regs_within_kernel_stack() checks @addr is within the kernel stack page(s).
>> + * If @addr is within the kernel stack, it returns true. If not, returns false.
>> + */
>> +bool regs_within_kernel_stack(struct pt_regs *regs, unsigned long addr)
>> +{
>> + return ((addr & ~(THREAD_SIZE - 1))  ==
>> + (kernel_stack_pointer(regs) & ~(THREAD_SIZE - 1))) ||
>> + on_irq_stack(addr, raw_smp_processor_id());
>> +}
>> +
>> +/**
>> + * regs_get_kernel_stack_nth() - get Nth entry of the stack
>> + * @regs: pt_regs which contains kernel stack pointer.
>> + * @n: stack entry number.
>> + *
>> + * regs_get_kernel_stack_nth() returns @n th entry of the kernel stack which
>> + * is specified by @regs. If the @n th entry is NOT in the kernel stack,
>> + * this returns 0.
>> + */
>> +unsigned long regs_get_kernel_stack_nth(struct pt_regs *regs, unsigned int n)
>> +{
>> + unsigned long *addr = (unsigned long *)kernel_stack_pointer(regs);
>> +
>> + addr += n;
>> + if (regs_within_kernel_stack(regs, (unsigned long)addr))
> If @addr falls in the interrupt stack, regs_within_kernel_stack()
> will return true. But is that what we want?
>

Yes, I think it is.  The function is used in regs_get_kernel_stack_nth()
to make sure the data being asked for (based on the pt_regs saved stack
pointer) is actually on the stack, whether it's "kernel" stack or "irq"
stack.

Thanks,
-dl


Re: [PATCH v12 05/10] arm64: Kprobes with single stepping support

David Long
In reply to this post by James Morse
On 05/12/2016 11:01 AM, James Morse wrote:

> Hi David, Sandeepa,
>
> On 27/04/16 19:53, David Long wrote:
>> From: Sandeepa Prabhu <[hidden email]>
>
>> diff --git a/arch/arm64/kernel/kprobes.c b/arch/arm64/kernel/kprobes.c
>> new file mode 100644
>> index 0000000..dfa1b1f
>> --- /dev/null
>> +++ b/arch/arm64/kernel/kprobes.c
>> @@ -0,0 +1,520 @@
>> +/*
>> + * arch/arm64/kernel/kprobes.c
>> + *
>> + * Kprobes support for ARM64
>> + *
>> + * Copyright (C) 2013 Linaro Limited.
>> + * Author: Sandeepa Prabhu <[hidden email]>
>> + *
>> + * This program is free software; you can redistribute it and/or modify
>> + * it under the terms of the GNU General Public License version 2 as
>> + * published by the Free Software Foundation.
>> + *
>> + * This program is distributed in the hope that it will be useful,
>> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
>> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
>> + * General Public License for more details.
>> + *
>> + */
>> +#include <linux/kernel.h>
>> +#include <linux/kprobes.h>
>> +#include <linux/module.h>
>> +#include <linux/slab.h>
>> +#include <linux/stop_machine.h>
>> +#include <linux/stringify.h>
>> +#include <asm/traps.h>
>> +#include <asm/ptrace.h>
>> +#include <asm/cacheflush.h>
>> +#include <asm/debug-monitors.h>
>> +#include <asm/system_misc.h>
>> +#include <asm/insn.h>
>> +#include <asm/uaccess.h>
>> +
>> +#include "kprobes-arm64.h"
>> +
>> +#define MIN_STACK_SIZE(addr) min((unsigned long)MAX_STACK_SIZE, \
>> + (unsigned long)current_thread_info() + THREAD_START_SP - (addr))
>
> What if we probe something called on the irq stack?
> This needs the on_irq_stack() checks too, the start/end can be found from the
> per-cpu irq_stack value.
>
> [ ... ]
>

OK.

>> +int __kprobes setjmp_pre_handler(struct kprobe *p, struct pt_regs *regs)
>> +{
>> + struct jprobe *jp = container_of(p, struct jprobe, kp);
>> + struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
>> + long stack_ptr = kernel_stack_pointer(regs);
>> +
>> + kcb->jprobe_saved_regs = *regs;
>> + memcpy(kcb->jprobes_stack, (void *)stack_ptr,
>> +       MIN_STACK_SIZE(stack_ptr));
>
> I wonder if we need this stack save/restore?
>
> The comment next to the equivalent code for x86 says:
>> gcc assumes that the callee owns the argument space and could overwrite it,
>> e.g. tailcall optimization. So, to be absolutely safe we also save and
>> restore enough stack bytes to cover the argument area.
>
> On arm64 the first eight arguments are passed in registers, so we might not need
> this stack copy. (sparc and powerpc work like this too, their versions of this
> function don't copy chunks of the stack).
>
> ... then I went looking for functions with >8 arguments...
>
> Looking at the arm64 defconfig dwarf debug data, there are 71 of these that
> don't get inlined, picking at random:
>> rockchip_clk_register_pll() has 13
>> fib_dump_info() has 11
>> vma_merge() has 10
>> vring_create_virtqueue() has 10
> etc...
>
> So we do need this stack copying, so that we can probe these functions without
> risking the arguments being modified.
>
> It may be worth including a comment to the effect that this stack save/restore
> is needed for functions that pass >8 arguments where the pre-handler may change
> these values on the stack.
>
>

I can add a comment.

>> + preempt_enable_no_resched();
>> + return 1;
>> +}
>> +
>
>
> Thanks,
>
> James
>

Thanks,
-dl
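The bound James is discussing can be modeled in user space. This is a sketch of the `MIN_STACK_SIZE()` arithmetic only; the `MAX_STACK_SIZE` and `THREAD_START_SP` values below are hypothetical stand-ins for the kernel constants, and `thread_info` is just a base address here.

```c
#include <assert.h>

#define MAX_STACK_SIZE 256UL     /* hypothetical copy cap */
#define THREAD_START_SP 16336UL  /* hypothetical initial-SP offset */

/*
 * Model of MIN_STACK_SIZE(addr): copy at most MAX_STACK_SIZE bytes
 * of stack (enough to cover spilled arguments 9+ under AAPCS64),
 * but never read past the top of the current thread's stack.
 */
static unsigned long min_stack_size(unsigned long thread_info,
				    unsigned long addr)
{
	unsigned long to_top = thread_info + THREAD_START_SP - addr;

	return to_top < MAX_STACK_SIZE ? to_top : MAX_STACK_SIZE;
}
```

Near the top of the stack the copy shrinks to the bytes that remain; far from the top it is clamped to the cap.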

Re: [PATCH v12 06/10] arm64: Treat all entry code as non-kprobe-able

David Long
In reply to this post by James Morse
On 05/12/2016 10:49 AM, James Morse wrote:

> Hi David, Pratyush
>
> On 27/04/16 19:53, David Long wrote:
>> From: Pratyush Anand <[hidden email]>
>>
>> Entry symbols are not kprobe safe. So blacklist them for kprobing.
>>
>> Signed-off-by: Pratyush Anand <[hidden email]>
>
>> diff --git a/arch/arm64/kernel/kprobes.c b/arch/arm64/kernel/kprobes.c
>> index dfa1b1f..6a1292b 100644
>> --- a/arch/arm64/kernel/kprobes.c
>> +++ b/arch/arm64/kernel/kprobes.c
>> @@ -29,6 +29,7 @@
>>   #include <asm/system_misc.h>
>>   #include <asm/insn.h>
>>   #include <asm/uaccess.h>
>> +#include <asm-generic/sections.h>
>>
>>   #include "kprobes-arm64.h"
>>
>> @@ -514,6 +515,15 @@ int __kprobes longjmp_break_handler(struct kprobe *p, struct pt_regs *regs)
>>   return 1;
>>   }
>>
>> +bool arch_within_kprobe_blacklist(unsigned long addr)
>> +{
>> + return  (addr >= (unsigned long)__kprobes_text_start &&
>> + addr < (unsigned long)__kprobes_text_end) ||
>> + (addr >= (unsigned long)__entry_text_start &&
>> + addr < (unsigned long)__entry_text_end) ||
>> + !!search_exception_tables(addr);
>> +}
>> +
>
> Looking at __kvm_hyp_vector, we don't have support for handling breakpoints at
> EL2, so we should forbid kprobing these address ranges too:
> __hyp_text_start -> __hyp_text_end
> __hyp_idmap_text_start -> __hyp_idmap_text_end
>
> These can probably be guarded with is_kernel_in_hyp_mode(), if this is true then
> we are running with VHE where this code runs at the same exception level as the
> rest of the kernel, so we can probe them. (In this case you may want to add
> 'eret' to aarch64_insn_is_branch() in patch 2)
>

OK.

>
> Probing things in the kernel idmap sounds dangerous! Let's blacklist that too:
> __idmap_text_start -> __idmap_text_end
>

OK.

>
>
> Thanks,
>
> James
>

Re: [PATCH v12 06/10] arm64: Treat all entry code as non-kprobe-able

David Long
In reply to this post by James Morse
On 05/12/2016 10:49 AM, James Morse wrote:

> Hi David, Pratyush
>
> On 27/04/16 19:53, David Long wrote:
>> From: Pratyush Anand <[hidden email]>
>>
>> Entry symbols are not kprobe safe. So blacklist them for kprobing.
>>
>> Signed-off-by: Pratyush Anand <[hidden email]>
>
>> diff --git a/arch/arm64/kernel/kprobes.c b/arch/arm64/kernel/kprobes.c
>> index dfa1b1f..6a1292b 100644
>> --- a/arch/arm64/kernel/kprobes.c
>> +++ b/arch/arm64/kernel/kprobes.c
>> @@ -29,6 +29,7 @@
>>   #include <asm/system_misc.h>
>>   #include <asm/insn.h>
>>   #include <asm/uaccess.h>
>> +#include <asm-generic/sections.h>
>>
>>   #include "kprobes-arm64.h"
>>
>> @@ -514,6 +515,15 @@ int __kprobes longjmp_break_handler(struct kprobe *p, struct pt_regs *regs)
>>   return 1;
>>   }
>>
>> +bool arch_within_kprobe_blacklist(unsigned long addr)
>> +{
>> + return  (addr >= (unsigned long)__kprobes_text_start &&
>> + addr < (unsigned long)__kprobes_text_end) ||
>> + (addr >= (unsigned long)__entry_text_start &&
>> + addr < (unsigned long)__entry_text_end) ||
>> + !!search_exception_tables(addr);
>> +}
>> +
>
> Looking at __kvm_hyp_vector, we don't have support for handling breakpoints at
> EL2, so we should forbid kprobing these address ranges too:
> __hyp_text_start -> __hyp_text_end
> __hyp_idmap_text_start -> __hyp_idmap_text_end
>
> These can probably be guarded with is_kernel_in_hyp_mode(), if this is true then
> we are running with VHE where this code runs at the same exception level as the
> rest of the kernel, so we can probe them. (In this case you may want to add
> 'eret' to aarch64_insn_is_branch() in patch 2)
>
>
> Probing things in the kernel idmap sounds dangerous! Let's blacklist that too:
> __idmap_text_start -> __idmap_text_end
>

I've made these changes.  I noticed there are no header declarations
for these symbols, so I've added local extern declarations in
arch_within_kprobe_blacklist().

Thanks,
-dl
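The range checks under review can be modeled in user space. The addresses below are hypothetical placeholders for the linker symbols named in the thread (`__kprobes_text_start`, `__entry_text_start`, `__hyp_text_start`, `__idmap_text_start`, etc.); the real function also consults `search_exception_tables()`, which this sketch leaves out.

```c
#include <assert.h>
#include <stdbool.h>

/* A half-open [start, end) text section, as the linker symbols define. */
struct text_range {
	unsigned long start, end;
};

/*
 * Model of arch_within_kprobe_blacklist(): an address is blacklisted
 * if it falls inside any of the forbidden text sections.
 */
static bool within_blacklist(const struct text_range *ranges, int n,
			     unsigned long addr)
{
	int i;

	for (i = 0; i < n; i++)
		if (addr >= ranges[i].start && addr < ranges[i].end)
			return true;
	return false;
}
```

Adding the hyp and idmap sections is then just two more entries in the table of ranges, which is why the blacklist extension is cheap.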


Re: [PATCH v12 05/10] arm64: Kprobes with single stepping support

David Long
In reply to this post by Huang Shijie-4
On 05/17/2016 04:58 AM, Huang Shijie wrote:

> On Wed, Apr 27, 2016 at 02:53:00PM -0400, David Long wrote:
>> +
>> +/*
>> + * Interrupts need to be disabled before single-step mode is set, and not
>> + * reenabled until after single-step mode ends.
>> + * Without disabling interrupts on the local CPU, there is a chance
>> + * of an interrupt occurring between the exception return and the
>> + * start of the out-of-line single-step, which would result in wrongly
>> + * single-stepping into the interrupt handler.
>> + */
>> +static void __kprobes kprobes_save_local_irqflag(struct pt_regs *regs)
>> +{
>> +     struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
>
> Why not add a parameter for this function to save the @kcb?
>
>> +
>> +     kcb->saved_irqflag = regs->pstate;
>> +     regs->pstate |= PSR_I_BIT;
>> +}
>> +
>> +static void __kprobes kprobes_restore_local_irqflag(struct pt_regs *regs)
>> +{
>> +     struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
> ditto.
>
>> +
>> +     if (kcb->saved_irqflag & PSR_I_BIT)
>> +             regs->pstate |= PSR_I_BIT;
>> +     else
>> +             regs->pstate &= ~PSR_I_BIT;
>> +}
>> +
>> +static void __kprobes
>> +set_ss_context(struct kprobe_ctlblk *kcb, unsigned long addr)
>> +{
>> +     kcb->ss_ctx.ss_pending = true;
>> +     kcb->ss_ctx.match_addr = addr + sizeof(kprobe_opcode_t);
>> +}
>> +
>> +static void __kprobes clear_ss_context(struct kprobe_ctlblk *kcb)
>> +{
>> +     kcb->ss_ctx.ss_pending = false;
>> +     kcb->ss_ctx.match_addr = 0;
>> +}
>> +
>> +static void __kprobes setup_singlestep(struct kprobe *p,
>> +                                    struct pt_regs *regs,
>> +                                    struct kprobe_ctlblk *kcb, int reenter)
>> +{
>> +     unsigned long slot;
>> +
>> +     if (reenter) {
>> +             save_previous_kprobe(kcb);
>> +             set_current_kprobe(p);
>> +             kcb->kprobe_status = KPROBE_REENTER;
>> +     } else {
>> +             kcb->kprobe_status = KPROBE_HIT_SS;
>> +     }
>> +
>> +     if (p->ainsn.insn) {
>> +             /* prepare for single stepping */
>> +             slot = (unsigned long)p->ainsn.insn;
>> +
>> +             set_ss_context(kcb, slot);      /* mark pending ss */
>> +
>> +             if (kcb->kprobe_status == KPROBE_REENTER)
>> +                     spsr_set_debug_flag(regs, 0);
>> +
>> +             /* IRQs and single stepping do not mix well. */
>> +             kprobes_save_local_irqflag(regs);
>> +             kernel_enable_single_step(regs);
>> +             instruction_pointer(regs) = slot;
>> +     } else  {
>> +             BUG();
>> +     }
>> +}
>> +
>> +static int __kprobes reenter_kprobe(struct kprobe *p,
>> +                                 struct pt_regs *regs,
>> +                                 struct kprobe_ctlblk *kcb)
>> +{
>> +     switch (kcb->kprobe_status) {
>> +     case KPROBE_HIT_SSDONE:
>> +     case KPROBE_HIT_ACTIVE:
>> +             kprobes_inc_nmissed_count(p);
>> +             setup_singlestep(p, regs, kcb, 1);
>> +             break;
>> +     case KPROBE_HIT_SS:
>> +     case KPROBE_REENTER:
>> +             pr_warn("Unrecoverable kprobe detected at %p.\n", p->addr);
>> +             dump_kprobe(p);
>> +             BUG();
>> +             break;
>> +     default:
>> +             WARN_ON(1);
>> +             return 0;
>> +     }
>> +
>> +     return 1;
>> +}
>> +
>> +static void __kprobes
>> +post_kprobe_handler(struct kprobe_ctlblk *kcb, struct pt_regs *regs)
>> +{
>> +     struct kprobe *cur = kprobe_running();
>> +
>> +     if (!cur)
>> +             return;
>> +
>> +     /* return addr restore if non-branching insn */
>> +     if (cur->ainsn.restore.type == RESTORE_PC) {
>> +             instruction_pointer(regs) = cur->ainsn.restore.addr;
>> +             if (!instruction_pointer(regs))
>> +                     BUG();
>> +     }
>> +
>> +     /* restore back original saved kprobe variables and continue */
>> +     if (kcb->kprobe_status == KPROBE_REENTER) {
>> +             restore_previous_kprobe(kcb);
>> +             return;
>> +     }
>> +     /* call post handler */
>> +     kcb->kprobe_status = KPROBE_HIT_SSDONE;
>> +     if (cur->post_handler)  {
>> +             /* post_handler can hit breakpoint and single step
>> +              * again, so we enable D-flag for recursive exception.
>> +              */
>> +             cur->post_handler(cur, regs, 0);
>> +     }
>> +
>> +     reset_current_kprobe();
>> +}
>> +
>> +int __kprobes kprobe_fault_handler(struct pt_regs *regs, unsigned int fsr)
>> +{
>> +     struct kprobe *cur = kprobe_running();
>> +     struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
>> +
>> +     switch (kcb->kprobe_status) {
>> +     case KPROBE_HIT_SS:
>> +     case KPROBE_REENTER:
>> +             /*
>> +              * We are here because the instruction being single
>> +              * stepped caused a page fault. We reset the current
>> +              * kprobe and the ip points back to the probe address
>> +              * and allow the page fault handler to continue as a
>> +              * normal page fault.
>> +              */
>> +             instruction_pointer(regs) = (unsigned long)cur->addr;
>> +             if (!instruction_pointer(regs))
>> +                     BUG();
>> +             if (kcb->kprobe_status == KPROBE_REENTER)
>> +                     restore_previous_kprobe(kcb);
>> +             else
>> +                     reset_current_kprobe();
>> +
>> +             break;
>> +     case KPROBE_HIT_ACTIVE:
>> +     case KPROBE_HIT_SSDONE:
>> +             /*
>> +              * We increment the nmissed count for accounting,
>> +              * we can also use npre/npostfault count for accounting
>> +              * these specific fault cases.
>> +              */
>> +             kprobes_inc_nmissed_count(cur);
>> +
>> +             /*
>> +              * We come here because instructions in the pre/post
>> +              * handler caused the page_fault, this could happen
>> +              * if handler tries to access user space by
>> +              * copy_from_user(), get_user() etc. Let the
>> +              * user-specified handler try to fix it first.
>> +              */
>> +             if (cur->fault_handler && cur->fault_handler(cur, regs, fsr))
>> +                     return 1;
>> +
>> +             /*
>> +              * In case the user-specified fault handler returned
>> +              * zero, try to fix up.
>> +              */
>> +             if (fixup_exception(regs))
>> +                     return 1;
>> +     }
>> +     return 0;
>> +}
>> +
>> +int __kprobes kprobe_exceptions_notify(struct notifier_block *self,
>> +                                    unsigned long val, void *data)
>> +{
>> +     return NOTIFY_DONE;
>> +}
>> +
>> +static void __kprobes kprobe_handler(struct pt_regs *regs)
>> +{
>> +     struct kprobe *p, *cur_kprobe;
>> +     struct kprobe_ctlblk *kcb;
>> +     unsigned long addr = instruction_pointer(regs);
>> +
>> +     kcb = get_kprobe_ctlblk();
>> +     cur_kprobe = kprobe_running();
>> +
>> +     p = get_kprobe((kprobe_opcode_t *) addr);
>> +
>> +     if (p) {
>> +             if (cur_kprobe) {
>> +                     if (reenter_kprobe(p, regs, kcb))
>> +                             return;
>> +             } else {
>> +                     /* Probe hit */
>> +                     set_current_kprobe(p);
>> +                     kcb->kprobe_status = KPROBE_HIT_ACTIVE;
>> +
>> +                     /*
>> +                      * If we have no pre-handler or it returned 0, we
>> +                      * continue with normal processing.  If we have a
>> +                      * pre-handler and it returned non-zero, it prepped
>> +                      * for calling the break_handler below on re-entry,
>> +                      * so get out doing nothing more here.
>> +                      *
>> +                      * pre_handler can hit a breakpoint and can step thru
>> +                      * before return, keep PSTATE D-flag enabled until
>> +                      * pre_handler return back.
>> +                      */
>> +                     if (!p->pre_handler || !p->pre_handler(p, regs)) {
>> +                             kcb->kprobe_status = KPROBE_HIT_SS;
> The above line is duplicated.
> You will set KPROBE_HIT_SS in the setup_singlestep.
>
>> +                             setup_singlestep(p, regs, kcb, 0);
>> +                             return;
>> +                     }
>> +             }
>> +     } else if ((le32_to_cpu(*(kprobe_opcode_t *) addr) ==
>> +         BRK64_OPCODE_KPROBES) && cur_kprobe) {
>> +             /* We probably hit a jprobe.  Call its break handler. */
>> +             if (cur_kprobe->break_handler  &&
>> +                  cur_kprobe->break_handler(cur_kprobe, regs)) {
>> +                     kcb->kprobe_status = KPROBE_HIT_SS;
> ditto
>> +                     setup_singlestep(cur_kprobe, regs, kcb, 0);
>> +                     return;
>> +             }
>> +     }
>> +     /*
>> +      * The breakpoint instruction was removed right
>> +      * after we hit it.  Another cpu has removed
>> +      * either a probepoint or a debugger breakpoint
>> +      * at this address.  In either case, no further
>> +      * handling of this interrupt is appropriate.
>> +      * Return back to original instruction, and continue.
>> +      */
>> +}
> thanks
> Huang Shijie
>


I've made the above changes for the next version of this patch.

Thanks,
-dl

Re: [PATCH v12 05/10] arm64: Kprobes with single stepping support

David Long
In reply to this post by Masami Hiramatsu-2
On 05/17/2016 11:29 PM, Masami Hiramatsu wrote:

> On Tue, 17 May 2016 16:58:09 +0800
> Huang Shijie <[hidden email]> wrote:
>
>> On Wed, Apr 27, 2016 at 02:53:00PM -0400, David Long wrote:
>>> +
>>> +/*
>>> + * Interrupts need to be disabled before single-step mode is set, and not
>>> + * reenabled until after single-step mode ends.
>>> + * Without disabling interrupts on the local CPU, there is a chance
>>> + * of an interrupt occurring between the exception return and the
>>> + * start of the out-of-line single-step, which would result in wrongly
>>> + * single-stepping into the interrupt handler.
>>> + */
>>> +static void __kprobes kprobes_save_local_irqflag(struct pt_regs *regs)
>>> +{
>>> +     struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
>>
>> Why not add a parameter for this function to save the @kcb?
>
> Good catch, it should use same kcb of caller.
>

I've made the change for the next version of the patch.

>>
>>> +
>>> +     kcb->saved_irqflag = regs->pstate;
>>> +     regs->pstate |= PSR_I_BIT;
>>> +}
>>> +
>>> +static void __kprobes kprobes_restore_local_irqflag(struct pt_regs *regs)
>>> +{
>>> +     struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
>> ditto.
>>

I've made the change.

>>> +
>>> +     if (kcb->saved_irqflag & PSR_I_BIT)
>>> +             regs->pstate |= PSR_I_BIT;
>>> +     else
>>> +             regs->pstate &= ~PSR_I_BIT;
>>> +}
>>> +
>>> +static void __kprobes
>>> +set_ss_context(struct kprobe_ctlblk *kcb, unsigned long addr)
>>> +{
>>> +     kcb->ss_ctx.ss_pending = true;
>>> +     kcb->ss_ctx.match_addr = addr + sizeof(kprobe_opcode_t);
>>> +}
>>> +
>>> +static void __kprobes clear_ss_context(struct kprobe_ctlblk *kcb)
>>> +{
>>> +     kcb->ss_ctx.ss_pending = false;
>>> +     kcb->ss_ctx.match_addr = 0;
>>> +}
>>> +
>>> +static void __kprobes setup_singlestep(struct kprobe *p,
>>> +                                    struct pt_regs *regs,
>>> +                                    struct kprobe_ctlblk *kcb, int reenter)
>>> +{
>>> +     unsigned long slot;
>>> +
>>> +     if (reenter) {
>>> +             save_previous_kprobe(kcb);
>>> +             set_current_kprobe(p);
>>> +             kcb->kprobe_status = KPROBE_REENTER;
>>> +     } else {
>>> +             kcb->kprobe_status = KPROBE_HIT_SS;
>>> +     }
>>> +
>>> +     if (p->ainsn.insn) {
>>> +             /* prepare for single stepping */
>>> +             slot = (unsigned long)p->ainsn.insn;
>>> +
>>> +             set_ss_context(kcb, slot);      /* mark pending ss */
>>> +
>>> +             if (kcb->kprobe_status == KPROBE_REENTER)
>>> +                     spsr_set_debug_flag(regs, 0);
>>> +
>>> +             /* IRQs and single stepping do not mix well. */
>>> +             kprobes_save_local_irqflag(regs);
>>> +             kernel_enable_single_step(regs);
>>> +             instruction_pointer(regs) = slot;
>>> +     } else  {
>>> +             BUG();
>
> You'd better use BUG_ON(!p->ainsn.insn);
>

I have that change ready but the BUG*() is removed entirely in patch
07/10 and the indentation changed back to the above, resulting in more
diffs and the same final code.

>>> +     }
>>> +}
>>> +
>>> +static int __kprobes reenter_kprobe(struct kprobe *p,
>>> +                                 struct pt_regs *regs,
>>> +                                 struct kprobe_ctlblk *kcb)
>>> +{
>>> +     switch (kcb->kprobe_status) {
>>> +     case KPROBE_HIT_SSDONE:
>>> +     case KPROBE_HIT_ACTIVE:
>>> +             kprobes_inc_nmissed_count(p);
>>> +             setup_singlestep(p, regs, kcb, 1);
>>> +             break;
>>> +     case KPROBE_HIT_SS:
>>> +     case KPROBE_REENTER:
>>> +             pr_warn("Unrecoverable kprobe detected at %p.\n", p->addr);
>>> +             dump_kprobe(p);
>>> +             BUG();
>>> +             break;
>>> +     default:
>>> +             WARN_ON(1);
>>> +             return 0;
>>> +     }
>>> +
>>> +     return 1;
>>> +}
>>> +
>>> +static void __kprobes
>>> +post_kprobe_handler(struct kprobe_ctlblk *kcb, struct pt_regs *regs)
>>> +{
>>> +     struct kprobe *cur = kprobe_running();
>>> +
>>> +     if (!cur)
>>> +             return;
>>> +
>>> +     /* return addr restore if non-branching insn */
>>> +     if (cur->ainsn.restore.type == RESTORE_PC) {
>>> +             instruction_pointer(regs) = cur->ainsn.restore.addr;
>>> +             if (!instruction_pointer(regs))
>>> +                     BUG();
>>> +     }
>>> +
>>> +     /* restore back original saved kprobe variables and continue */
>>> +     if (kcb->kprobe_status == KPROBE_REENTER) {
>>> +             restore_previous_kprobe(kcb);
>>> +             return;
>>> +     }
>>> +     /* call post handler */
>>> +     kcb->kprobe_status = KPROBE_HIT_SSDONE;
>>> +     if (cur->post_handler)  {
>>> +             /* post_handler can hit breakpoint and single step
>>> +              * again, so we enable D-flag for recursive exception.
>>> +              */
>>> +             cur->post_handler(cur, regs, 0);
>>> +     }
>>> +
>>> +     reset_current_kprobe();
>>> +}
>>> +
>>> +int __kprobes kprobe_fault_handler(struct pt_regs *regs, unsigned int fsr)
>>> +{
>>> +     struct kprobe *cur = kprobe_running();
>>> +     struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
>>> +
>>> +     switch (kcb->kprobe_status) {
>>> +     case KPROBE_HIT_SS:
>>> +     case KPROBE_REENTER:
>>> +             /*
>>> +              * We are here because the instruction being single
>>> +              * stepped caused a page fault. We reset the current
>>> +              * kprobe and the ip points back to the probe address
>>> +              * and allow the page fault handler to continue as a
>>> +              * normal page fault.
>>> +              */
>>> +             instruction_pointer(regs) = (unsigned long)cur->addr;
>>> +             if (!instruction_pointer(regs))
>>> +                     BUG();
>
> This can be BUG_ON(!instruction_pointer(regs)).
>

I've made the change.

>>> +             if (kcb->kprobe_status == KPROBE_REENTER)
>>> +                     restore_previous_kprobe(kcb);
>>> +             else
>>> +                     reset_current_kprobe();
>>> +
>>> +             break;
>>> +     case KPROBE_HIT_ACTIVE:
>>> +     case KPROBE_HIT_SSDONE:
>>> +             /*
>>> +              * We increment the nmissed count for accounting,
>>> +              * we can also use npre/npostfault count for accounting
>>> +              * these specific fault cases.
>>> +              */
>>> +             kprobes_inc_nmissed_count(cur);
>>> +
>>> +             /*
>>> +              * We come here because instructions in the pre/post
>>> +              * handler caused the page_fault, this could happen
>>> +              * if handler tries to access user space by
>>> +              * copy_from_user(), get_user() etc. Let the
>>> +              * user-specified handler try to fix it first.
>>> +              */
>>> +             if (cur->fault_handler && cur->fault_handler(cur, regs, fsr))
>>> +                     return 1;
>>> +
>>> +             /*
>>> +              * In case the user-specified fault handler returned
>>> +              * zero, try to fix up.
>>> +              */
>>> +             if (fixup_exception(regs))
>>> +                     return 1;
>>> +     }
>>> +     return 0;
>>> +}
>>> +
>>> +int __kprobes kprobe_exceptions_notify(struct notifier_block *self,
>>> +                                    unsigned long val, void *data)
>>> +{
>>> +     return NOTIFY_DONE;
>>> +}
>>> +
>>> +static void __kprobes kprobe_handler(struct pt_regs *regs)
>>> +{
>>> +     struct kprobe *p, *cur_kprobe;
>>> +     struct kprobe_ctlblk *kcb;
>>> +     unsigned long addr = instruction_pointer(regs);
>>> +
>>> +     kcb = get_kprobe_ctlblk();
>>> +     cur_kprobe = kprobe_running();
>>> +
>>> +     p = get_kprobe((kprobe_opcode_t *) addr);
>>> +
>>> +     if (p) {
>>> +             if (cur_kprobe) {
>>> +                     if (reenter_kprobe(p, regs, kcb))
>>> +                             return;
>>> +             } else {
>>> +                     /* Probe hit */
>>> +                     set_current_kprobe(p);
>>> +                     kcb->kprobe_status = KPROBE_HIT_ACTIVE;
>>> +
>>> +                     /*
>>> +                      * If we have no pre-handler or it returned 0, we
>>> +                      * continue with normal processing.  If we have a
>>> +                      * pre-handler and it returned non-zero, it prepped
>>> +                      * for calling the break_handler below on re-entry,
>>> +                      * so get out doing nothing more here.
>>> +                      *
>>> +                      * pre_handler can hit a breakpoint and can step thru
>>> +                      * before return, keep PSTATE D-flag enabled until
>>> +                      * pre_handler return back.
>>> +                      */
>>> +                     if (!p->pre_handler || !p->pre_handler(p, regs)) {
>>> +                             kcb->kprobe_status = KPROBE_HIT_SS;
>> The above line is duplicated.
>> You will set KPROBE_HIT_SS in the setup_singlestep.
>
> Right.
>

I've removed it.

>>
>>> +                             setup_singlestep(p, regs, kcb, 0);
>>> +                             return;
>>> +                     }
>>> +             }
>>> +     } else if ((le32_to_cpu(*(kprobe_opcode_t *) addr) ==
>>> +         BRK64_OPCODE_KPROBES) && cur_kprobe) {
>>> +             /* We probably hit a jprobe.  Call its break handler. */
>>> +             if (cur_kprobe->break_handler  &&
>>> +                  cur_kprobe->break_handler(cur_kprobe, regs)) {
>>> +                     kcb->kprobe_status = KPROBE_HIT_SS;
>> ditto

I've removed it.

>>> +                     setup_singlestep(cur_kprobe, regs, kcb, 0);
>>> +                     return;
>>> +             }
>>> +     }
>>> +     /*
>>> +      * The breakpoint instruction was removed right
>>> +      * after we hit it.  Another cpu has removed
>>> +      * either a probepoint or a debugger breakpoint
>>> +      * at this address.  In either case, no further
>>> +      * handling of this interrupt is appropriate.
>>> +      * Return back to original instruction, and continue.
>>> +      */
>>> +}
>
> Thanks,
>
>
>

Thanks,
-dl
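The refactor Huang and Masami asked for can be sketched in user space: pass the control block in explicitly rather than re-fetching per-CPU state in each helper. The structs below are simplified stand-ins for `pt_regs` and `kprobe_ctlblk`, and the `PSR_I_BIT` value is only a placeholder bit position.

```c
#include <assert.h>

#define PSR_I_BIT 0x80UL  /* hypothetical IRQ-mask bit in pstate */

/* Simplified models of the kernel structures. */
struct regs_model { unsigned long pstate; };
struct ctlblk_model { unsigned long saved_irqflag; };

/*
 * Sketch of the reviewed change: the caller's kcb is a parameter,
 * so save and restore are guaranteed to use the same control block.
 */
static void save_local_irqflag(struct ctlblk_model *kcb,
			       struct regs_model *regs)
{
	kcb->saved_irqflag = regs->pstate;
	regs->pstate |= PSR_I_BIT;	/* mask IRQs for single-step */
}

static void restore_local_irqflag(struct ctlblk_model *kcb,
				  struct regs_model *regs)
{
	if (kcb->saved_irqflag & PSR_I_BIT)
		regs->pstate |= PSR_I_BIT;
	else
		regs->pstate &= ~PSR_I_BIT;
}
```

A round trip through save and restore leaves `pstate` exactly as it was, whether or not IRQs were masked to begin with.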

Re: [PATCH v12 07/10] arm64: kprobes instruction simulation support

David Long
In reply to this post by Huang Shijie-4
On 05/18/2016 09:52 PM, Huang Shijie wrote:

> On Wed, Apr 27, 2016 at 02:53:02PM -0400, David Long wrote:
>> From: Sandeepa Prabhu <[hidden email]>
>>
>> Kprobes needs simulation of instructions that cannot be stepped
>> from a different memory location, e.g.: those instructions
>> that uses PC-relative addressing. In simulation, the behaviour
>> of the instruction is implemented using a copy of pt_regs.
>>
>> The following instruction categories are simulated:
>>   - All branching instructions(conditional, register, and immediate)
>>   - Literal access instructions(load-literal, adr/adrp)
>>
>> Conditional execution is limited to branching instructions in
>> ARM v8. If conditions at PSTATE do not match the condition fields
>> of opcode, the instruction is effectively NOP.
>>
>> Thanks to Will Cohen for assorted suggested changes.
>>
>> Signed-off-by: Sandeepa Prabhu <[hidden email]>
>> Signed-off-by: William Cohen <[hidden email]>
>> Signed-off-by: David A. Long <[hidden email]>
>> ---
>>   arch/arm64/include/asm/insn.h            |   1 +
>>   arch/arm64/include/asm/probes.h          |   5 +-
>>   arch/arm64/kernel/Makefile               |   3 +-
>>   arch/arm64/kernel/insn.c                 |   1 +
>>   arch/arm64/kernel/kprobes-arm64.c        |  29 ++++
>>   arch/arm64/kernel/kprobes.c              |  32 ++++-
>>   arch/arm64/kernel/probes-simulate-insn.c | 218 +++++++++++++++++++++++++++++++
>>   arch/arm64/kernel/probes-simulate-insn.h |  28 ++++
>>   8 files changed, 311 insertions(+), 6 deletions(-)
>>   create mode 100644 arch/arm64/kernel/probes-simulate-insn.c
>>   create mode 100644 arch/arm64/kernel/probes-simulate-insn.h
>>
>> diff --git a/arch/arm64/include/asm/insn.h b/arch/arm64/include/asm/insn.h
>> index b9567a1..26cee10 100644
>> --- a/arch/arm64/include/asm/insn.h
>> +++ b/arch/arm64/include/asm/insn.h
>> @@ -410,6 +410,7 @@ u32 aarch32_insn_mcr_extract_crm(u32 insn);
>>
>>   typedef bool (pstate_check_t)(unsigned long);
>>   extern pstate_check_t * const opcode_condition_checks[16];
>> +
>>   #endif /* __ASSEMBLY__ */
>>
>>   #endif /* __ASM_INSN_H */
>> diff --git a/arch/arm64/include/asm/probes.h b/arch/arm64/include/asm/probes.h
>> index c5fcbe6..d524f7d 100644
>> --- a/arch/arm64/include/asm/probes.h
>> +++ b/arch/arm64/include/asm/probes.h
>> @@ -15,11 +15,12 @@
>>   #ifndef _ARM_PROBES_H
>>   #define _ARM_PROBES_H
>>
>> +#include <asm/opcodes.h>
>> +
>>   struct kprobe;
>>   struct arch_specific_insn;
>>
>>   typedef u32 kprobe_opcode_t;
>> -typedef unsigned long (kprobes_pstate_check_t)(unsigned long);
>>   typedef void (kprobes_handler_t) (u32 opcode, long addr, struct pt_regs *);
>>
>>   enum pc_restore_type {
>> @@ -35,7 +36,7 @@ struct kprobe_pc_restore {
>>   /* architecture specific copy of original instruction */
>>   struct arch_specific_insn {
>>   kprobe_opcode_t *insn;
>> - kprobes_pstate_check_t *pstate_cc;
>> + pstate_check_t *pstate_cc;
>>   kprobes_handler_t *handler;
>>   /* restore address after step xol */
>>   struct kprobe_pc_restore restore;
>> diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
>> index 8816de2..43bf6cc 100644
>> --- a/arch/arm64/kernel/Makefile
>> +++ b/arch/arm64/kernel/Makefile
>> @@ -37,7 +37,8 @@ arm64-obj-$(CONFIG_CPU_PM) += sleep.o suspend.o
>>   arm64-obj-$(CONFIG_CPU_IDLE) += cpuidle.o
>>   arm64-obj-$(CONFIG_JUMP_LABEL) += jump_label.o
>>   arm64-obj-$(CONFIG_KGDB) += kgdb.o
>> -arm64-obj-$(CONFIG_KPROBES) += kprobes.o kprobes-arm64.o
>> +arm64-obj-$(CONFIG_KPROBES) += kprobes.o kprobes-arm64.o \
>> +   probes-simulate-insn.o
>>   arm64-obj-$(CONFIG_EFI) += efi.o efi-entry.stub.o
>>   arm64-obj-$(CONFIG_PCI) += pci.o
>>   arm64-obj-$(CONFIG_ARMV8_DEPRECATED) += armv8_deprecated.o
>> diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c
>> index f79e72e..bb2738c 100644
>> --- a/arch/arm64/kernel/insn.c
>> +++ b/arch/arm64/kernel/insn.c
>> @@ -30,6 +30,7 @@
>>   #include <asm/cacheflush.h>
>>   #include <asm/debug-monitors.h>
>>   #include <asm/fixmap.h>
>> +#include <asm/opcodes.h>
>>   #include <asm/insn.h>
>>
>>   #define AARCH64_INSN_SF_BIT BIT(31)
>> diff --git a/arch/arm64/kernel/kprobes-arm64.c b/arch/arm64/kernel/kprobes-arm64.c
>> index e07727a..487238a 100644
>> --- a/arch/arm64/kernel/kprobes-arm64.c
>> +++ b/arch/arm64/kernel/kprobes-arm64.c
>> @@ -21,6 +21,7 @@
>>   #include <asm/sections.h>
>>
>>   #include "kprobes-arm64.h"
>> +#include "probes-simulate-insn.h"
>>
>>   static bool __kprobes aarch64_insn_is_steppable(u32 insn)
>>   {
>> @@ -62,8 +63,36 @@ arm_probe_decode_insn(kprobe_opcode_t insn, struct arch_specific_insn *asi)
>>   */
>>   if (aarch64_insn_is_steppable(insn))
>>   return INSN_GOOD;
>> +
>> + if (aarch64_insn_is_bcond(insn)) {
>> + asi->handler = simulate_b_cond;
>> + } else if (aarch64_insn_is_cbz(insn) ||
>> +    aarch64_insn_is_cbnz(insn)) {
>> + asi->handler = simulate_cbz_cbnz;
>> + } else if (aarch64_insn_is_tbz(insn) ||
>> +    aarch64_insn_is_tbnz(insn)) {
>> + asi->handler = simulate_tbz_tbnz;
>> + } else if (aarch64_insn_is_adr_adrp(insn))
>> + asi->handler = simulate_adr_adrp;
>> + else if (aarch64_insn_is_b(insn) ||
>> +    aarch64_insn_is_bl(insn))
>> + asi->handler = simulate_b_bl;
>
> For a consistent coding style, we'd better add "{}" here and below as well.
>
> thanks
> Huang Shijie
>
>> + else if (aarch64_insn_is_br(insn) ||
>> +    aarch64_insn_is_blr(insn) ||
>> +    aarch64_insn_is_ret(insn))
>> + asi->handler = simulate_br_blr_ret;
>> + else if (aarch64_insn_is_ldr_lit(insn))
>> + asi->handler = simulate_ldr_literal;
>> + else if (aarch64_insn_is_ldrsw_lit(insn))
>> + asi->handler = simulate_ldrsw_literal;
>>   else
>> + /*
>> + * Instruction cannot be stepped out-of-line and we don't
>> + * (yet) simulate it.
>> + */
>>   return INSN_REJECTED;
>> +
>> + return INSN_GOOD_NO_SLOT;
>>   }
>

I've made this change for the next version of the patch.

Thanks,
-dl
