[RFC v2 0/8] drm: explicit fencing support

Re: [RFC v2 5/8] drm/fence: add in-fences support

Ville Syrjälä
On Tue, Apr 26, 2016 at 04:36:36PM +0200, Daniel Vetter wrote:

> On Tue, Apr 26, 2016 at 11:14:22AM -0300, Gustavo Padovan wrote:
> > 2016-04-26 Ville Syrjälä <[hidden email]>:
> >
> > > On Mon, Apr 25, 2016 at 07:33:25PM -0300, Gustavo Padovan wrote:
> > > > From: Gustavo Padovan <[hidden email]>
> > > >
> > > > There is now a new property called FENCE_FD attached to every plane
> > > > state that receives the sync_file fd from userspace via the atomic commit
> > > > IOCTL.
> > >
> > > I still don't like this property abuse. Also with atomic, all passed
> > > fences must be waited upon before anything is done, so attaching them
> > > to planes seems like it might just give people the wrong idea.
> >
> > I'm actually fine with this as a property, but another solution is to use
> > an array of {plane, fence_fd} and extend the drm_atomic_ioctl args just like
> > we have done for out-fences. However, the FENCE_FD property is easier to
> > handle in userspace than the array. Any other ideas?
>
> Imo FENCE_FD is perfectly fine. But what's the concern around giving
> people the wrong idea with attaching fences to planes? For nonblocking
> commits we need to store them somewhere for the worker, and drm_plane_state
> seems like as good a place as any other.

It gives the impression that each plane might flip as soon as its fence
signals.

--
Ville Syrjälä
Intel OTC
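For context, this is roughly what attaching an in-fence through the
proposed FENCE_FD property looks like from userspace with libdrm's atomic
API. This is only a sketch: the FENCE_FD property id is an assumption from
this series, discovered beforehand via drmModeObjectGetProperties().

#include <errno.h>
#include <stdint.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

/* Flip fb_id onto plane_id, gated on the sync_file fd fence_fd. */
int commit_with_in_fence(int fd, uint32_t plane_id,
			 uint32_t prop_fb_id, uint32_t prop_fence_fd,
			 uint32_t fb_id, int fence_fd)
{
	drmModeAtomicReq *req = drmModeAtomicAlloc();
	int ret;

	if (!req)
		return -ENOMEM;

	drmModeAtomicAddProperty(req, plane_id, prop_fb_id, fb_id);
	drmModeAtomicAddProperty(req, plane_id, prop_fence_fd, fence_fd);

	/* The kernel waits for fence_fd to signal before flipping. */
	ret = drmModeAtomicCommit(fd, req, DRM_MODE_ATOMIC_NONBLOCK, NULL);
	drmModeAtomicFree(req);
	return ret;
}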

Re: [RFC v2 7/8] drm/fence: add fence timeline to drm_crtc

Ville Syrjälä
In reply to this post by Gustavo Padovan
On Tue, Apr 26, 2016 at 11:23:06AM -0300, Gustavo Padovan wrote:

> 2016-04-26 Ville Syrjälä <[hidden email]>:
>
> > On Mon, Apr 25, 2016 at 07:33:27PM -0300, Gustavo Padovan wrote:
> > > From: Gustavo Padovan <[hidden email]>
> > >
> > > Create one timeline context for each CRTC to be able to handle out-fences
> > > and signal them. It adds a few members to struct drm_crtc: fence_context,
> > > where we store the context we get from fence_context_alloc(), the
> > > fence seqno and the fence lock, that we pass in fence_init() to be
> > > used by the fence.
> > >
> > > Signed-off-by: Gustavo Padovan <[hidden email]>
> > > ---
> > >  drivers/gpu/drm/drm_crtc.c | 29 +++++++++++++++++++++++++++++
> > >  include/drm/drm_crtc.h     | 19 +++++++++++++++++++
> > >  2 files changed, 48 insertions(+)
> > >
> > > diff --git a/drivers/gpu/drm/drm_crtc.c b/drivers/gpu/drm/drm_crtc.c
> > > index 65212ce..cf9750a 100644
> > > --- a/drivers/gpu/drm/drm_crtc.c
> > > +++ b/drivers/gpu/drm/drm_crtc.c
> > > @@ -659,6 +659,32 @@ static unsigned int drm_num_crtcs(struct drm_device *dev)
> > >   return num;
> > >  }
> > >  
> > > +static const char *drm_crtc_fence_get_driver_name(struct fence *fence)
> > > +{
> > > +	struct drm_crtc *crtc = fence_to_crtc(fence);
> > > +
> > > +	return crtc->dev->driver->name;
> > > +}
> > > +
> > > +static const char *drm_crtc_fence_get_timeline_name(struct fence *fence)
> > > +{
> > > +	struct drm_crtc *crtc = fence_to_crtc(fence);
> > > +
> > > +	return crtc->name;
> > > +}
> >
> > Is that exported to userspace? crtc->name is an internal thing, not
> > meant for outside consumption.
>
> No. However it may be exported via debugfs at some point. Maybe have
> drm_crtc->timeline_name which has the obj_id instead, e.g. "drm_crtc19"?

I'm fine either way if it's an internal thing. So pick whichever makes
people's life easier I guess.

--
Ville Syrjälä
Intel OTC
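A minimal sketch of Gustavo's suggestion: build a stable name from the
object id at CRTC init and return it from the fence ops. The timeline_name
member is hypothetical, not part of the posted series.

static const char *drm_crtc_fence_get_timeline_name(struct fence *fence)
{
	struct drm_crtc *crtc = fence_to_crtc(fence);

	return crtc->timeline_name;	/* e.g. "drm_crtc19" */
}

/* ...and at CRTC init time: */
	snprintf(crtc->timeline_name, sizeof(crtc->timeline_name),
		 "drm_crtc%d", crtc->base.id);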

Re: [RFC v2 5/8] drm/fence: add in-fences support

Daniel Vetter
In reply to this post by Ville Syrjälä
On Tue, Apr 26, 2016 at 07:26:21PM +0300, Ville Syrjälä wrote:

> On Tue, Apr 26, 2016 at 04:36:36PM +0200, Daniel Vetter wrote:
> > On Tue, Apr 26, 2016 at 11:14:22AM -0300, Gustavo Padovan wrote:
> > > 2016-04-26 Ville Syrjälä <[hidden email]>:
> > >
> > > > On Mon, Apr 25, 2016 at 07:33:25PM -0300, Gustavo Padovan wrote:
> > > > > From: Gustavo Padovan <[hidden email]>
> > > > >
> > > > > There is now a new property called FENCE_FD attached to every plane
> > > > > state that receives the sync_file fd from userspace via the atomic commit
> > > > > IOCTL.
> > > >
> > > > I still don't like this property abuse. Also with atomic, all passed
> > > > fences must be waited upon before anything is done, so attaching them
> > > > to planes seems like it might just give people the wrong idea.
> > >
> > > I'm actually fine with this as a property, but another solution is to use
> > > an array of {plane, fence_fd} and extend the drm_atomic_ioctl args just like
> > > we have done for out-fences. However, the FENCE_FD property is easier to
> > > handle in userspace than the array. Any other ideas?
> >
> > Imo FENCE_FD is perfectly fine. But what's the concern around giving
> > people the wrong idea with attaching fences to planes? For nonblocking
> > commits we need to store them somewhere for the worker, and drm_plane_state
> > seems like as good a place as any other.
>
> It gives the impression that each plane might flip as soon as its fence
> signals.

That wouldn't be atomic. Not sure how someone could come up with that
idea. I mean we could move FENCE_FD to the crtc (fence fds can be merged),
but that's just a needless difference from what hwc expects. I think
aligning with the only real-world user in this case makes sense.

Plus docs in case someone has funny ideas.
-Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
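Merging fence fds, as mentioned above, is a single ioctl on a sync_file;
a sketch against the sync_file uapi that was being de-staged around this
time:

#include <string.h>
#include <sys/ioctl.h>
#include <linux/sync_file.h>

/* Merge two sync_file fds into one fd that signals only when both
 * underlying fences have signaled. Returns the new fd, or -1 on error. */
int sync_merge(int fd1, int fd2)
{
	struct sync_merge_data data;

	memset(&data, 0, sizeof(data));
	strncpy(data.name, "merged", sizeof(data.name) - 1);
	data.fd2 = fd2;

	if (ioctl(fd1, SYNC_IOC_MERGE, &data) < 0)
		return -1;

	return data.fence;
}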

Re: [RFC v2 5/8] drm/fence: add in-fences support

Ville Syrjälä
On Tue, Apr 26, 2016 at 07:20:49PM +0200, Daniel Vetter wrote:

> On Tue, Apr 26, 2016 at 07:26:21PM +0300, Ville Syrjälä wrote:
> > On Tue, Apr 26, 2016 at 04:36:36PM +0200, Daniel Vetter wrote:
> > > On Tue, Apr 26, 2016 at 11:14:22AM -0300, Gustavo Padovan wrote:
> > > > 2016-04-26 Ville Syrjälä <[hidden email]>:
> > > >
> > > > > On Mon, Apr 25, 2016 at 07:33:25PM -0300, Gustavo Padovan wrote:
> > > > > > From: Gustavo Padovan <[hidden email]>
> > > > > >
> > > > > > There is now a new property called FENCE_FD attached to every plane
> > > > > > state that receives the sync_file fd from userspace via the atomic commit
> > > > > > IOCTL.
> > > > >
> > > > > I still don't like this property abuse. Also with atomic, all passed
> > > > > fences must be waited upon before anything is done, so attaching them
> > > > > to planes seems like it might just give people the wrong idea.
> > > >
> > > > I'm actually fine with this as a property, but another solution is to use
> > > > an array of {plane, fence_fd} and extend the drm_atomic_ioctl args just like
> > > > we have done for out-fences. However, the FENCE_FD property is easier to
> > > > handle in userspace than the array. Any other ideas?
> > >
> > > Imo FENCE_FD is perfectly fine. But what's the concern around giving
> > > people the wrong idea with attaching fences to planes? For nonblocking
> > > commits we need to store them somewhere for the worker, and drm_plane_state
> > > seems like as good a place as any other.
> >
> > It gives the impression that each plane might flip as soon as its fence
> > signals.
>
> That wouldn't be atomic. Not sure how someone could come up with that
> idea.

What else would it mean? It's attached to a specific plane, so why would
it affect other planes?

> I mean we could move FENCE_FD to the crtc (fence fds can be merged),
> but that's just a needless difference from what hwc expects. I think
> aligning with the only real-world user in this case makes sense.

Well it doesn't belong on the crtc either. I would just stick it in the
ioctl as a separate thing; then it's clear it's related to the whole
operation rather than to any kms object.

>
> Plus docs in case someone has funny ideas.

Weren't you just quoting Rusty's API manifesto recently? ;)

Maybe it was someone else.

--
Ville Syrjälä
Intel OTC

Re: [RFC v2 5/8] drm/fence: add in-fences support

Daniel Vetter
On Tue, Apr 26, 2016 at 08:40:45PM +0300, Ville Syrjälä wrote:

> On Tue, Apr 26, 2016 at 07:20:49PM +0200, Daniel Vetter wrote:
> > On Tue, Apr 26, 2016 at 07:26:21PM +0300, Ville Syrjälä wrote:
> > > On Tue, Apr 26, 2016 at 04:36:36PM +0200, Daniel Vetter wrote:
> > > > On Tue, Apr 26, 2016 at 11:14:22AM -0300, Gustavo Padovan wrote:
> > > > > 2016-04-26 Ville Syrjälä <[hidden email]>:
> > > > >
> > > > > > On Mon, Apr 25, 2016 at 07:33:25PM -0300, Gustavo Padovan wrote:
> > > > > > > From: Gustavo Padovan <[hidden email]>
> > > > > > >
> > > > > > > There is now a new property called FENCE_FD attached to every plane
> > > > > > > state that receives the sync_file fd from userspace via the atomic commit
> > > > > > > IOCTL.
> > > > > >
> > > > > > I still don't like this property abuse. Also with atomic, all passed
> > > > > > fences must be waited upon before anything is done, so attaching them
> > > > > > to planes seems like it might just give people the wrong idea.
> > > > >
> > > > > I'm actually fine with this as a property, but another solution is to use
> > > > > an array of {plane, fence_fd} and extend the drm_atomic_ioctl args just like
> > > > > we have done for out-fences. However, the FENCE_FD property is easier to
> > > > > handle in userspace than the array. Any other ideas?
> > > >
> > > > Imo FENCE_FD is perfectly fine. But what's the concern around giving
> > > > people the wrong idea with attaching fences to planes? For nonblocking
> > > > commits we need to store them somewhere for the worker, and drm_plane_state
> > > > seems like as good a place as any other.
> > >
> > > It gives the impression that each plane might flip as soon as its fence
> > > signals.
> >
> > That wouldn't be atomic. Not sure how someone could come up with that
> > idea.
>
> What else would it mean? It's attached to a specific plane, so why would
> it affect other planes?
>
> > I mean we could move FENCE_FD to the crtc (fence fds can be merged),
> > but that's just a needless difference from what hwc expects. I think
> > aligning with the only real-world user in this case makes sense.
>
> Well it doesn't belong on the crtc either. I would just stick it in the
> ioctl as a separate thing; then it's clear it's related to the whole
> operation rather than to any kms object.

We want it per-crtc I'd say, so that you could flip each crtc
individually. But really the reason for per-plane is hw composer from
Android. I don't see any point in designing an api that's needlessly
different from what the main user expects (even if it may be silly).

The other bit is that for implicit syncing you need one fence per fb/plane
anyway, so this also fits nicely on the driver side I think.

> > Plus docs in case someone has funny ideas.
>
> Weren't you just quoting Rusty's API manifesto recently? ;)

I quote it all the time.

http://sweng.the-davies.net/Home/rustys-api-design-manifesto

I think the current interface is scoring pretty high, since as part of
Gustavo's work there's now also a new atomic helper which will get the
waiting right. There's still the problem that neither for drm_event nor
the fence do we have anything idiot-proof. So for that the solution is
testcases (which are also happening).
-Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
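The atomic helper mentioned above essentially boils down to waiting for
every plane's in-fence before the commit touches the hardware. A rough
sketch of the idea (the helper name is made up here; the series stores
the fence in drm_plane_state):

static int wait_for_in_fences(struct drm_atomic_state *state)
{
	struct drm_plane *plane;
	struct drm_plane_state *plane_state;
	long ret;
	int i;

	for_each_plane_in_state(state, plane, plane_state, i) {
		if (!plane_state->fence)
			continue;

		/* Block until the buffer for this plane is ready. */
		ret = fence_wait(plane_state->fence, false);
		if (ret)
			return ret;

		fence_put(plane_state->fence);
		plane_state->fence = NULL;
	}

	return 0;
}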

Re: [RFC v2 5/8] drm/fence: add in-fences support

Ville Syrjälä
On Tue, Apr 26, 2016 at 08:23:46PM +0200, Daniel Vetter wrote:

> On Tue, Apr 26, 2016 at 08:40:45PM +0300, Ville Syrjälä wrote:
> > On Tue, Apr 26, 2016 at 07:20:49PM +0200, Daniel Vetter wrote:
> > > On Tue, Apr 26, 2016 at 07:26:21PM +0300, Ville Syrjälä wrote:
> > > > On Tue, Apr 26, 2016 at 04:36:36PM +0200, Daniel Vetter wrote:
> > > > > On Tue, Apr 26, 2016 at 11:14:22AM -0300, Gustavo Padovan wrote:
> > > > > > 2016-04-26 Ville Syrjälä <[hidden email]>:
> > > > > >
> > > > > > > On Mon, Apr 25, 2016 at 07:33:25PM -0300, Gustavo Padovan wrote:
> > > > > > > > From: Gustavo Padovan <[hidden email]>
> > > > > > > >
> > > > > > > > There is now a new property called FENCE_FD attached to every plane
> > > > > > > > state that receives the sync_file fd from userspace via the atomic commit
> > > > > > > > IOCTL.
> > > > > > >
> > > > > > > I still don't like this property abuse. Also with atomic, all passed
> > > > > > > fences must be waited upon before anything is done, so attaching them
> > > > > > > to planes seems like it might just give people the wrong idea.
> > > > > >
> > > > > > I'm actually fine with this as a property, but another solution is to use
> > > > > > an array of {plane, fence_fd} and extend the drm_atomic_ioctl args just like
> > > > > > we have done for out-fences. However, the FENCE_FD property is easier to
> > > > > > handle in userspace than the array. Any other ideas?
> > > > >
> > > > > Imo FENCE_FD is perfectly fine. But what's the concern around giving
> > > > > people the wrong idea with attaching fences to planes? For nonblocking
> > > > > commits we need to store them somewhere for the worker, and drm_plane_state
> > > > > seems like as good a place as any other.
> > > >
> > > > It gives the impression that each plane might flip as soon as its fence
> > > > signals.
> > >
> > > That wouldn't be atomic. Not sure how someone could come up with that
> > > idea.
> >
> > What else would it mean? It's attached to a specific plane, so why would
> > it affect other planes?
> >
> > > I mean we could move FENCE_FD to the crtc (fence fds can be merged),
> > > but that's just a needless difference from what hwc expects. I think
> > > aligning with the only real-world user in this case makes sense.
> >
> > Well it doesn't belong on the crtc either. I would just stick it in the
> > ioctl as a separate thing; then it's clear it's related to the whole
> > operation rather than to any kms object.
>
> We want it per-crtc I'd say, so that you could flip each crtc
> individually.

Then you could just issue multiple ioctls. E.g. for those nasty 4k MST
displays (or just otherwise neatly synced displays) you want to wait for
all the fences upfront and then flip everything at once, otherwise you'll
get nasty tears at the seams.

> But really the reason for per-plane is hw composer from
> Android. I don't see any point in designing an api that's needlessly
> different from what the main user expects (even if it may be silly).

What are they doing that they can't stuff the fences into an array
instead of props?

>
> The other bit is that for implicit syncing you need one fence per fb/plane
> anyway, so this also fits nicely on the driver side I think.
>
> > > Plus docs in case someone has funny ideas.
> >
> Weren't you just quoting Rusty's API manifesto recently? ;)
>
> I quote it all the time.
>
> http://sweng.the-davies.net/Home/rustys-api-design-manifesto
>
> I think the current interface is scoring pretty high, since as part of
> Gustavo's work there's now also a new atomic helper which will get the
> waiting right. There's still the problem that neither for drm_event nor
> the fence do we have anything idiot-proof. So for that the solution is
> testcases (which are also happening).
> -Daniel
> --
> Daniel Vetter
> Software Engineer, Intel Corporation
> http://blog.ffwll.ch

--
Ville Syrjälä
Intel OTC

Re: [RFC v2 5/8] drm/fence: add in-fences support

Daniel Vetter
On Tue, Apr 26, 2016 at 09:55:06PM +0300, Ville Syrjälä wrote:

> On Tue, Apr 26, 2016 at 08:23:46PM +0200, Daniel Vetter wrote:
> > On Tue, Apr 26, 2016 at 08:40:45PM +0300, Ville Syrjälä wrote:
> > > On Tue, Apr 26, 2016 at 07:20:49PM +0200, Daniel Vetter wrote:
> > > > On Tue, Apr 26, 2016 at 07:26:21PM +0300, Ville Syrjälä wrote:
> > > > > On Tue, Apr 26, 2016 at 04:36:36PM +0200, Daniel Vetter wrote:
> > > > > > On Tue, Apr 26, 2016 at 11:14:22AM -0300, Gustavo Padovan wrote:
> > > > > > > 2016-04-26 Ville Syrjälä <[hidden email]>:
> > > > > > >
> > > > > > > > On Mon, Apr 25, 2016 at 07:33:25PM -0300, Gustavo Padovan wrote:
> > > > > > > > > From: Gustavo Padovan <[hidden email]>
> > > > > > > > >
> > > > > > > > > There is now a new property called FENCE_FD attached to every plane
> > > > > > > > > state that receives the sync_file fd from userspace via the atomic commit
> > > > > > > > > IOCTL.
> > > > > > > >
> > > > > > > > I still don't like this property abuse. Also with atomic, all passed
> > > > > > > > fences must be waited upon before anything is done, so attaching them
> > > > > > > > to planes seems like it might just give people the wrong idea.
> > > > > > >
> > > > > > > I'm actually fine with this as a property, but another solution is to use
> > > > > > > an array of {plane, fence_fd} and extend the drm_atomic_ioctl args just like
> > > > > > > we have done for out-fences. However, the FENCE_FD property is easier to
> > > > > > > handle in userspace than the array. Any other ideas?
> > > > > >
> > > > > > Imo FENCE_FD is perfectly fine. But what's the concern around giving
> > > > > > people the wrong idea with attaching fences to planes? For nonblocking
> > > > > > commits we need to store them somewhere for the worker, and drm_plane_state
> > > > > > seems like as good a place as any other.
> > > > >
> > > > > It gives the impression that each plane might flip as soon as its fence
> > > > > signals.
> > > >
> > > > That wouldn't be atomic. Not sure how someone could come up with that
> > > > idea.
> > >
> > > What else would it mean? It's attached to a specific plane, so why would
> > > it affect other planes?
> > >
> > > > I mean we could move FENCE_FD to the crtc (fence fds can be merged),
> > > > but that's just a needless difference from what hwc expects. I think
> > > > aligning with the only real-world user in this case makes sense.
> > >
> > > Well it doesn't belong on the crtc either. I would just stick it in the
> > > ioctl as a separate thing; then it's clear it's related to the whole
> > > operation rather than to any kms object.
> >
> > We want it per-crtc I'd say, so that you could flip each crtc
> > individually.
>
> Then you could just issue multiple ioctls. E.g. for those nasty 4k MST
> displays (or just otherwise neatly synced displays) you want to wait for
> all the fences upfront and then flip everything at once, otherwise you'll
> get nasty tears at the seams.
>
> > But really the reason for per-plane is hw composer from
> > Android. I don't see any point in designing an api that's needlessly
> > different from what the main user expects (even if it may be silly).
>
> What are they doing that they can't stuff the fences into an array
> instead of props?

The hw composer interface is one in-fence per plane. That's really the
major reason why the kernel interface is built to match. And I really
don't think we should diverge just because we have a slightly different
color preference ;-)

As long as you end up with a pile of fences somehow it'll work.
-Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

Re: [RFC v2 5/8] drm/fence: add in-fences support

Greg Hackmann
On 04/26/2016 01:05 PM, Daniel Vetter wrote:

> On Tue, Apr 26, 2016 at 09:55:06PM +0300, Ville Syrjälä wrote:
>> On Tue, Apr 26, 2016 at 08:23:46PM +0200, Daniel Vetter wrote:
>>> On Tue, Apr 26, 2016 at 08:40:45PM +0300, Ville Syrjälä wrote:
>>> But really the reason for per-plane is hw composer from
>>> Android. I don't see any point in designing an api that's needlessly
>>> different from what the main user expects (even if it may be silly).
>>
>> What are they doing that they can't stuff the fences into an array
>> instead of props?
>
> The hw composer interface is one in-fence per plane. That's really the
> major reason why the kernel interface is built to match. And I really
> don't think we should diverge just because we have a slightly different
> color preference ;-)
>
> As long as you end up with a pile of fences somehow it'll work.
> -Daniel
>

The relationship between layers and fences is only fuzzy and indirect
though.  The relationship is really between the buffer you're displaying
on that layer, and the fence representing the work done to render into
that buffer.  SurfaceFlinger just happens to bundle them together inside
the same struct hwc_layer_1 as an API convenience.

Which is kind of splitting hairs as long as you have a 1-to-1
relationship between layers and DRM planes.  But that's not always the case.

A (per-CRTC?) array of fences would be more flexible.  And even in the
cases where you could make a 1-to-1 mapping between planes and fences,
it's not that much more work for userspace to assemble those fences into
an array anyway.
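For reference, this is how HWC1 bundles things: the struct Greg mentions
carries the fence fds right next to the buffer they guard (trimmed here to
the relevant fields of struct hwc_layer_1 from Android's hwcomposer.h):

typedef struct hwc_layer_1 {
	/* ... */
	buffer_handle_t handle;	/* the buffer to compose */
	int acquireFenceFd;	/* wait on this before reading 'handle' */
	int releaseFenceFd;	/* HWC signals this when done with 'handle' */
	/* ... */
} hwc_layer_1_t;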

Re: [RFC v2 1/8] dma-buf/fence: add fence_collection fences

Daniel Vetter
In reply to this post by Gustavo Padovan
On Tue, Apr 26, 2016 at 12:02:08PM -0300, Gustavo Padovan wrote:

> 2016-04-26 Daniel Vetter <[hidden email]>:
>
> > On Mon, Apr 25, 2016 at 07:33:21PM -0300, Gustavo Padovan wrote:
> > > From: Gustavo Padovan <[hidden email]>
> > >
> > > struct fence_collection inherits from struct fence and carries a
> > > collection of fences that need to be waited on together.
> > >
> > > It is useful to translate a sync_file to a fence to remove the complexity
> > > of dealing with sync_files on DRM drivers. So even if there are many
> > > fences in the sync_file that need to be waited on for a commit to happen,
> > > they all get added to the fence_collection and passed for DRM use as
> > > a standard struct fence.
> > >
> > > That means that no changes are needed to any driver besides supporting fences.
> > >
> > > fence_collection's fence doesn't belong to any timeline context, so
> > > fence_is_later() and fence_later() are not meant to be called with
> > > fence_collection fences.
> > >
> > > v2: Comments by Daniel Vetter:
> > > - merge fence_collection_init() and fence_collection_add()
> > > - only add callbacks at ->enable_signalling()
> > > - remove fence_collection_put()
> > > - check for type on to_fence_collection()
> > > - adjust fence_is_later() and fence_later() to WARN_ON() if they
> > > are used with collection fences.
> > >
> > > Signed-off-by: Gustavo Padovan <[hidden email]>
> >
> > The FENCE_NO_CONTEXT semantics need an ack from the amdgpu maintainers. I'm not
> > entirely sure they won't hit the new WARN_ON by accident now. Please
> > cc Alex Deucher & Christian König.
>
> Sure, I'll Cc them in the next revision. But if they use
> fence_context_alloc() to get the context they should never hit any
> WARN_ON, as context numbers now start at 1. 0 is reserved for
> FENCE_NO_CONTEXT.

I was more concerned whether the codepaths could accidentally walk over
non-amdgpu fences (through prime buffer sharing for example). Otoh that
would be a preexisting bug I think ...
-Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
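For reference, the WARN_ON under discussion guards the seqno comparison in
fence_is_later(), which is only meaningful within a single context. A
sketch of the v2 semantics, where context 0 is reserved as
FENCE_NO_CONTEXT for collection fences:

static inline bool fence_is_later(struct fence *f1, struct fence *f2)
{
	/* Collection fences have no timeline, so ordering is undefined. */
	if (WARN_ON(f1->context == FENCE_NO_CONTEXT ||
		    f1->context != f2->context))
		return false;

	return f1->seqno - f2->seqno < INT_MAX;
}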

Re: [RFC v2 5/8] drm/fence: add in-fences support

Daniel Vetter
In reply to this post by Greg Hackmann
On Tue, Apr 26, 2016 at 01:48:02PM -0700, Greg Hackmann wrote:

> On 04/26/2016 01:05 PM, Daniel Vetter wrote:
> >On Tue, Apr 26, 2016 at 09:55:06PM +0300, Ville Syrjälä wrote:
> >>On Tue, Apr 26, 2016 at 08:23:46PM +0200, Daniel Vetter wrote:
> >>>On Tue, Apr 26, 2016 at 08:40:45PM +0300, Ville Syrjälä wrote:
> >>>But really the reason for per-plane is hw composer from
> >>>Android. I don't see any point in designing an api that's needlessly
> >>>different from what the main user expects (even if it may be silly).
> >>
> >>What are they doing that they can't stuff the fences into an array
> >>instead of props?
> >
> >The hw composer interface is one in-fence per plane. That's really the
> >major reason why the kernel interface is built to match. And I really
> >don't think we should diverge just because we have a slightly different
> >color preference ;-)
> >
> >As long as you end up with a pile of fences somehow it'll work.
> >-Daniel
> >
>
> The relationship between layers and fences is only fuzzy and indirect
> though.  The relationship is really between the buffer you're displaying on
> that layer, and the fence representing the work done to render into that
> buffer.  SurfaceFlinger just happens to bundle them together inside the same
> struct hwc_layer_1 as an API convenience.
>
> Which is kind of splitting hairs as long as you have a 1-to-1 relationship
> between layers and DRM planes.  But that's not always the case.
>
> A (per-CRTC?) array of fences would be more flexible.  And even in the cases
> where you could make a 1-to-1 mapping between planes and fences, it's not
> that much more work for userspace to assemble those fences into an array
> anyway.

I'm ok with an array too if that's what you folks prefer (it's meant to be
used by you after all). I just don't want just 1 fence for the entire op,
forcing userspace to first merge them all together. That seems silly.

One side-effect of that is that we'd also have to rework all the internal
bits and move fences around in atomic. Which means changing a pile of
drivers. Not sure that's worth it, but I'd be ok either way really.
-Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

Re: [RFC v2 5/8] drm/fence: add in-fences support

Daniel Stone
In reply to this post by Greg Hackmann
Hi,

On 26 April 2016 at 21:48, Greg Hackmann <[hidden email]> wrote:

> On 04/26/2016 01:05 PM, Daniel Vetter wrote:
>> On Tue, Apr 26, 2016 at 09:55:06PM +0300, Ville Syrjälä wrote:
>>> What are they doing that they can't stuff the fences into an array
>>> instead of props?
>>
>> The hw composer interface is one in-fence per plane. That's really the
>> major reason why the kernel interface is built to match. And I really
>> don't think we should diverge just because we have a slightly different
>> color preference ;-)
>
> The relationship between layers and fences is only fuzzy and indirect
> though.  The relationship is really between the buffer you're displaying on
> that layer, and the fence representing the work done to render into that
> buffer.  SurfaceFlinger just happens to bundle them together inside the same
> struct hwc_layer_1 as an API convenience.

Right, and when using implicit fencing, this comes as a plane
property, by virtue of plane -> fb -> buffer -> fence.

> Which is kind of splitting hairs as long as you have a 1-to-1 relationship
> between layers and DRM planes.  But that's not always the case.

Can you please elaborate?

> A (per-CRTC?) array of fences would be more flexible.  And even in the cases
> where you could make a 1-to-1 mapping between planes and fences, it's not
> that much more work for userspace to assemble those fences into an array
> anyway.

As Ville says, I don't want to go down the path of scheduling CRTC
updates separately, because that breaks MST pretty badly. If you don't
want your updates to display atomically, then don't schedule them
atomically ... ? That's the only reason I can see for making fencing
per-CRTC, rather than just a pile of unassociated fences appended to
the request. Per-CRTC fences also force userspace to merge fences
before submission when using multiple planes per CRTC, which is pretty
punitive.

I think having it semantically attached to the plane is a little bit
nicer for tracing (why was this request delayed? -> a fence -> which
buffer was that fence for?) at a glance. Also the 'pile of appended
fences' model is a bit awkward for more generic userspace, which
creates a libdrm request and builds it (add a plane, try it out, wind
back) incrementally. Using properties makes that really easy, but
without properties, we'd have to add separate codepaths - and thus
separate ABI, which complicates distribution - to libdrm to account
for a separate plane array which shares a cursor with the properties.
So for that reason if none other, I'd really prefer not to go down
that route.

Cheers,
Daniel
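The build-and-wind-back pattern described above maps directly onto
libdrm's request cursor. A sketch (the FENCE_FD property id is an
assumption from this series, discovered elsewhere):

#include <stdint.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

/* Try adding one plane (fb + in-fence) to an atomic request. If the
 * resulting configuration is rejected, wind the request back so the
 * caller can fall back to GPU composition for this layer. */
int try_plane(int fd, drmModeAtomicReq *req, uint32_t plane_id,
	      uint32_t prop_fb_id, uint32_t fb_id,
	      uint32_t prop_fence_fd, int fence_fd)
{
	int cursor = drmModeAtomicGetCursor(req);
	int ret;

	drmModeAtomicAddProperty(req, plane_id, prop_fb_id, fb_id);
	drmModeAtomicAddProperty(req, plane_id, prop_fence_fd, fence_fd);

	ret = drmModeAtomicCommit(fd, req, DRM_MODE_ATOMIC_TEST_ONLY, NULL);
	if (ret)
		drmModeAtomicSetCursor(req, cursor);	/* wind back */

	return ret;
}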

Re: [RFC v2 7/8] drm/fence: add fence timeline to drm_crtc

Daniel Stone
In reply to this post by Gustavo Padovan
Hi,

On 26 April 2016 at 00:33, Gustavo Padovan <[hidden email]> wrote:
> +static inline struct drm_crtc *fence_to_crtc(struct fence *fence)
> +{
> +       if (fence->ops != &drm_crtc_fence_ops)
> +               return NULL;

Since this is (currently) only used before unconditional dereferences,
maybe turn this into a BUG_ON instead of return NULL?

Cheers,
Daniel
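With that suggestion folded in, the helper would look something like the
following. The container_of() tail is an assumption (the hunk quoted above
is truncated), and the embedded fence member name is hypothetical:

static inline struct drm_crtc *fence_to_crtc(struct fence *fence)
{
	/* A foreign fence here is a driver bug, not a runtime condition. */
	BUG_ON(fence->ops != &drm_crtc_fence_ops);

	return container_of(fence, struct drm_crtc, fence);
}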

Re: [RFC v2 5/8] drm/fence: add in-fences support

Gustavo Padovan
In reply to this post by Daniel Stone
2016-04-27 Daniel Stone <[hidden email]>:

> Hi,
>
> On 26 April 2016 at 21:48, Greg Hackmann <[hidden email]> wrote:
> > On 04/26/2016 01:05 PM, Daniel Vetter wrote:
> >> On Tue, Apr 26, 2016 at 09:55:06PM +0300, Ville Syrjälä wrote:
> >>> What are they doing that they can't stuff the fences into an array
> >>> instead of props?
> >>
> >> The hw composer interface is one in-fence per plane. That's really the
> >> major reason why the kernel interface is built to match. And I really
> >> don't think we should diverge just because we have a slightly different
> >> color preference ;-)
> >
> > The relationship between layers and fences is only fuzzy and indirect
> > though.  The relationship is really between the buffer you're displaying on
> > that layer, and the fence representing the work done to render into that
> > buffer.  SurfaceFlinger just happens to bundle them together inside the same
> > struct hwc_layer_1 as an API convenience.
>
> Right, and when using implicit fencing, this comes as a plane
> property, by virtue of plane -> fb -> buffer -> fence.
>
> > Which is kind of splitting hairs as long as you have a 1-to-1 relationship
> > between layers and DRM planes.  But that's not always the case.
>
> Can you please elaborate?
>
> > A (per-CRTC?) array of fences would be more flexible.  And even in the cases
> > where you could make a 1-to-1 mapping between planes and fences, it's not
> > that much more work for userspace to assemble those fences into an array
> > anyway.
>
> As Ville says, I don't want to go down the path of scheduling CRTC
> updates separately, because that breaks MST pretty badly. If you don't
> want your updates to display atomically, then don't schedule them
> atomically ... ? That's the only reason I can see for making fencing
> per-CRTC, rather than just a pile of unassociated fences appended to
> the request. Per-CRTC fences also force userspace to merge fences
> before submission when using multiple planes per CRTC, which is pretty
> punitive.
>
> I think having it semantically attached to the plane is a little bit
> nicer for tracing (why was this request delayed? -> a fence -> which
> buffer was that fence for?) at a glance. Also the 'pile of appended
> fences' model is a bit awkward for more generic userspace, which
> creates a libdrm request and builds it (add a plane, try it out, wind
> back) incrementally. Using properties makes that really easy, but
> without properties, we'd have to add separate codepaths - and thus
> separate ABI, which complicates distribution - to libdrm to account
> for a separate plane array which shares a cursor with the properties.
> So for that reason if none other, I'd really prefer not to go down
> that route.

I also agree to have it as a FENCE_FD prop on the plane. Summarizing the
arguments on this thread, they are:

 - implicit fencing also needs one fence per plane/fb, so it will be good to
   match with that.
 - requires userspace to always merge fences
 - can use standard plane properties, making kernel and userspace life easier;
   an array brings more work to build the atomic request plus extra checks
   in the kernel.
 - does not need changes to drivers
 - better for tracing, can identify the buffer/fence promptly

        Gustavo

Re: [RFC v2 5/8] drm/fence: add in-fences support

Daniel Vetter
On Thu, Apr 28, 2016 at 11:36:44AM -0300, Gustavo Padovan wrote:

> 2016-04-27 Daniel Stone <[hidden email]>:
>
> > Hi,
> >
> > On 26 April 2016 at 21:48, Greg Hackmann <[hidden email]> wrote:
> > > On 04/26/2016 01:05 PM, Daniel Vetter wrote:
> > >> On Tue, Apr 26, 2016 at 09:55:06PM +0300, Ville Syrjälä wrote:
> > >>> What are they doing that they can't stuff the fences into an array
> > >>> instead of props?
> > >>
> > >> The hw composer interface is one in-fence per plane. That's really the
> > >> major reason why the kernel interface is built to match. And I really
> > >> don't think we should diverge just because we have a slightly different
> > >> color preference ;-)
> > >
> > > The relationship between layers and fences is only fuzzy and indirect
> > > though.  The relationship is really between the buffer you're displaying on
> > > that layer, and the fence representing the work done to render into that
> > > buffer.  SurfaceFlinger just happens to bundle them together inside the same
> > > struct hwc_layer_1 as an API convenience.
> >
> > Right, and when using implicit fencing, this comes as a plane
> > property, by virtue of plane -> fb -> buffer -> fence.
> >
> > > Which is kind of splitting hairs as long as you have a 1-to-1 relationship
> > > between layers and DRM planes.  But that's not always the case.
> >
> > Can you please elaborate?
> >
> > > A (per-CRTC?) array of fences would be more flexible.  And even in the cases
> > > where you could make a 1-to-1 mapping between planes and fences, it's not
> > > that much more work for userspace to assemble those fences into an array
> > > anyway.
> >
> > As Ville says, I don't want to go down the path of scheduling CRTC
> > updates separately, because that breaks MST pretty badly. If you don't
> > want your updates to display atomically, then don't schedule them
> > atomically ... ? That's the only reason I can see for making fencing
> > per-CRTC, rather than just a pile of unassociated fences appended to
> > the request. Per-CRTC fences also force userspace to merge fences
> > before submission when using multiple planes per CRTC, which is pretty
> > punitive.
> >
> > I think having it semantically attached to the plane is a little bit
> > nicer for tracing (why was this request delayed? -> a fence -> which
> > buffer was that fence for?) at a glance. Also the 'pile of appended
> > fences' model is a bit awkward for more generic userspace, which
> > creates a libdrm request and builds it (add a plane, try it out, wind
> > back) incrementally. Using properties makes that really easy, but
> > without properties, we'd have to add separate codepaths - and thus
> > separate ABI, which complicates distribution - to libdrm to account
> > for a separate plane array which shares a cursor with the properties.
> > So for that reason if none other, I'd really prefer not to go down
> > that route.
>
> I also agree to have it as a FENCE_FD prop on the plane. Summarizing the
> arguments on this thread, they are:
>
>  - implicit fencing also needs one fence per plane/fb, so it will be good to
>    match with that.
>  - requires userspace to always merge fences

"does not require" I presume?

>  - can use standard plane properties, making kernel and userspace life easier;
>    an array brings more work to build the atomic request plus extra checks
>    in the kernel.
>  - does not need changes to drivers
>  - better for tracing, can identify the buffer/fence promptly

- Fits in well with the libdrm atomic rollback support - no need to manage
  fences separately when incrementally building an atomic commit.
 
>
> Gustavo
> _______________________________________________
> dri-devel mailing list
> [hidden email]
> https://lists.freedesktop.org/mailman/listinfo/dri-devel

--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

Re: [RFC v2 1/8] dma-buf/fence: add fence_collection fences

Gustavo Padovan
In reply to this post by Chris Wilson
2016-04-26 Chris Wilson <[hidden email]>:

> On Mon, Apr 25, 2016 at 07:33:21PM -0300, Gustavo Padovan wrote:
> > +static const char *fence_collection_get_timeline_name(struct fence *fence)
> > +{
> > +	return "no context";
>
> "unbound" to distinguish from fence contexts within a timeline?
>
> > +static bool fence_collection_enable_signaling(struct fence *fence)
> > +{
> > +	struct fence_collection *collection = to_fence_collection(fence);
> > +	int i;
> > +
> > +	for (i = 0 ; i < collection->num_fences ; i++) {
> > +		if (fence_add_callback(collection->fences[i].fence,
> > +				       &collection->fences[i].cb,
> > +				       collection_check_cb_func)) {
> > +			atomic_dec(&collection->num_pending_fences);
> > +			return false;
>
> Don't stop, we need to enable all the others!
>
> > +		}
> > +	}
> > +
> > +	return !!atomic_read(&collection->num_pending_fences);
>
> Redundant !!
>
> > +}
> > +
> > +static bool fence_collection_signaled(struct fence *fence)
> > +{
> > +	struct fence_collection *collection = to_fence_collection(fence);
> > +
> > +	return (atomic_read(&collection->num_pending_fences) == 0);
>
> Redundant ()
>
> > +static signed long fence_collection_wait(struct fence *fence, bool intr,
> > +					 signed long timeout)
> > +{
>
> What advantage does this have over fence_default_wait? You enable
> signaling on all, then wait sequentially. The code looks redundant and
> could just use fence_default_wait instead.

None actually, I'll just replace it with fence_default_wait().

        Gustavo
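Folding Chris's comments in, the enable_signaling hook would keep walking
the array when one fence has already signaled, and the redundant !! goes
away. A sketch against the structures in this patch:

static bool fence_collection_enable_signaling(struct fence *fence)
{
	struct fence_collection *collection = to_fence_collection(fence);
	int i;

	for (i = 0 ; i < collection->num_fences ; i++) {
		/* A nonzero return means this fence already signaled:
		 * account for it, but keep enabling the others. */
		if (fence_add_callback(collection->fences[i].fence,
				       &collection->fences[i].cb,
				       collection_check_cb_func))
			atomic_dec(&collection->num_pending_fences);
	}

	return atomic_read(&collection->num_pending_fences);
}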

Re: [RFC v2 8/8] drm/fence: add out-fences support

Gustavo Padovan
In reply to this post by Daniel Vetter
2016-04-26 Daniel Vetter <[hidden email]>:

> On Mon, Apr 25, 2016 at 07:33:28PM -0300, Gustavo Padovan wrote:
> > From: Gustavo Padovan <[hidden email]>
> >
> > Support DRM out-fences by creating a sync_file with a fence for each crtc
> > update when the DRM_MODE_ATOMIC_OUT_FENCE flag is set.
> >
> > We then send a struct drm_out_fences array with the out-fence fds back in
> > the drm_atomic_ioctl() as an out arg in the out_fences_ptr field.
> >
> > struct drm_out_fences {
> > 	__u32   crtc_id;
> > 	__u32   fd;
> > };
> >
> > v2: Comment by Rob Clark:
> > - Squash commit that adds DRM_MODE_ATOMIC_OUT_FENCE flag here.
> >
> >     Comment by Daniel Vetter:
> > - Add clean up code for out_fences
> >
> > Signed-off-by: Gustavo Padovan <[hidden email]>
> > ---
> >  drivers/gpu/drm/drm_atomic.c | 163 +++++++++++++++++++++++++++++++++++++++++--
> >  include/drm/drm_crtc.h       |  10 +++
> >  include/uapi/drm/drm_mode.h  |  11 ++-
> >  3 files changed, 179 insertions(+), 5 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/drm_atomic.c b/drivers/gpu/drm/drm_atomic.c
> > index 5f9d434..06c6007 100644
> > --- a/drivers/gpu/drm/drm_atomic.c
> > +++ b/drivers/gpu/drm/drm_atomic.c
> > @@ -1566,6 +1566,133 @@ void drm_atomic_clean_old_fb(struct drm_device *dev,
> >  }
> >  EXPORT_SYMBOL(drm_atomic_clean_old_fb);
> >  
> > +static struct drm_out_fence_state *get_out_fence(struct drm_device *dev,
> > +						  struct drm_atomic_state *state,
> > +						  uint32_t __user *out_fences_ptr,
> > +						  uint64_t count_out_fences,
> > +						  uint64_t user_data)
> > +{
> > +	struct drm_crtc *crtc;
> > +	struct drm_crtc_state *crtc_state;
> > +	struct drm_out_fences *out_fences;
> > +	struct drm_out_fence_state *fence_state;
> > +	int num_fences = 0;
> > +	int i, ret;
> > +
> > +	if (count_out_fences > dev->mode_config.num_crtc)
> > +		return ERR_PTR(-EINVAL);
> > +
> > +	out_fences = kcalloc(count_out_fences, sizeof(*out_fences),
> > +			     GFP_KERNEL);
> > +	if (!out_fences)
> > +		return ERR_PTR(-ENOMEM);
>
> A bit tricky, but the above kcalloc is the only thing that catches integer
> overflows in count_out_fences. Needs a comment imo since this could be a
> security exploit if we accidentally screw it up.

The check above makes sure that count_out_fences is not bigger than
num_crtc. Doesn't that fix this?

>
> Also needs a testcase imo.
>
> > +
> > +	fence_state = kcalloc(count_out_fences, sizeof(*fence_state),
> > +			      GFP_KERNEL);
> > +	if (!fence_state) {
> > +		kfree(out_fences);
> > +		return ERR_PTR(-ENOMEM);
> > +	}
> > +
> > +	for (i = 0 ; i < count_out_fences ; i++)
> > +		fence_state[i].fd = -1;
> > +
> > +	for_each_crtc_in_state(state, crtc, crtc_state, i) {
> > +		struct drm_pending_vblank_event *e;
> > +		struct fence *fence;
> > +		char name[32];
> > +
> > +		fence = kzalloc(sizeof(*fence), GFP_KERNEL);
> > +		if (!fence) {
> > +			ret = -ENOMEM;
> > +			goto out;
> > +		}
> > +
> > +		fence_init(fence, &drm_crtc_fence_ops, &crtc->fence_lock,
> > +			   crtc->fence_context, crtc->fence_seqno);
> > +
> > +		snprintf(name, sizeof(name), "crtc-%d_%lu",
> > +			 drm_crtc_index(crtc), crtc->fence_seqno++);
>
> Hm ... fence_init_with_name? I'm kinda confused why we only name fences
> that are exported though, and why not all of them. Debugging fence
> deadlocks is real hard, so giving them all names might be a good idea.
>
> Anyway, seems like more room for a bit more sync_file/struct fence
> merging.

We just removed the name from sync_file_create(), so the snprintf() is not
even necessary here anymore.

>
> > +
> > +		fence_state[i].fd = get_unused_fd_flags(O_CLOEXEC);
> > +		if (fence_state[i].fd < 0) {
> > +			fence_put(fence);
> > +			ret = fence_state[i].fd;
> > +			goto out;
> > +		}
> > +
> > +		fence_state[i].sync_file = sync_file_create(name, fence);
> > +		if (!fence_state[i].sync_file) {
> > +			fence_put(fence);
> > +			ret = -ENOMEM;
> > +			goto out;
> > +		}
> > +
> > +		if (crtc_state->event) {
> > +			crtc_state->event->base.fence = fence;
> > +		} else {
>
> This looks a bit funny - I'd change the create event logic to create an
> event either if we have the either drm event or out-fence flag set.

Ok.

>
> > +			e = create_vblank_event(dev, NULL, fence, user_data);
> > +			if (!e) {
> > +				ret = -ENOMEM;
> > +				goto out;
> > +			}
> > +
> > +			crtc_state->event = e;
> > +		}
> > +
> > +		out_fences[num_fences].crtc_id = crtc->base.id;
> > +		out_fences[num_fences].fd = fence_state[i].fd;
> > +		num_fences++;
> > +	}
> > +
> > +	if (copy_to_user(out_fences_ptr, out_fences,
> > +			 num_fences * sizeof(*out_fences))) {
> > +		ret = -EFAULT;
> > +		goto out;
> > +	}
> > +
> > +	kfree(out_fences);
> > +
> > +	return fence_state;
> > +
> > +out:
> > +	for (i = 0 ; i < count_out_fences ; i++) {
> > +		if (fence_state[i].sync_file)
> > +			sync_file_put(fence_state[i].sync_file);
> > +		if (fence_state[i].fd >= 0)
> > +			put_unused_fd(fence_state[i].fd);
> > +	}
> > +
> > +	kfree(fence_state);
> > +	kfree(out_fences);
> > +
> > +	return ERR_PTR(ret);
> > +}
> > +
> > +static void install_out_fence(uint64_t count_out_fences,
> > +			      struct drm_out_fence_state *state)
> > +{
> > +	int i;
> > +
> > +	for (i = 0 ; i < count_out_fences ; i++) {
> > +		if (state[i].sync_file)
> > +			sync_file_install(state[i].sync_file, state[i].fd);
>
> Is sync_file_install anything more than fd_install? Imo a wrapper for
> just that function is overkill and just hides stuff. I'd nuke it (another
> sync_file patch though). In dma-buf we also don't wrap it, we only have a
> convenience wrapper for users who want to combine the
> get_unused_flags+fd_install in one go. And maybe even that is silly.
>
> Ok, I unlazied and it's indeed just a silly wrapper. Please nuke it.

Already fixed in the sync_file de-staging patches.

>
> > +	}
> > +}
> > +
> > +static void release_out_fence(uint64_t count_out_fences,
> > +			      struct drm_out_fence_state *state)
> > +{
> > +	int i;
> > +
> > +	for (i = 0 ; i < count_out_fences ; i++) {
> > +		if (state[i].sync_file)
> > +			sync_file_put(state[i].sync_file);
> > +		if (state[i].fd >= 0)
> > +			put_unused_fd(state[i].fd);
> > +	}
> > +}
> > +
> >  int drm_mode_atomic_ioctl(struct drm_device *dev,
> >  			  void *data, struct drm_file *file_priv)
> >  {
> > @@ -1574,12 +1701,14 @@ int drm_mode_atomic_ioctl(struct drm_device *dev,
> >  	uint32_t __user *count_props_ptr = (uint32_t __user *)(unsigned long)(arg->count_props_ptr);
> >  	uint32_t __user *props_ptr = (uint32_t __user *)(unsigned long)(arg->props_ptr);
> >  	uint64_t __user *prop_values_ptr = (uint64_t __user *)(unsigned long)(arg->prop_values_ptr);
> > +	uint32_t __user *out_fences_ptr = (uint32_t __user *)(unsigned long)(arg->out_fences_ptr);
> >  	unsigned int copied_objs, copied_props;
> >  	struct drm_atomic_state *state;
> >  	struct drm_modeset_acquire_ctx ctx;
> >  	struct drm_plane *plane;
> >  	struct drm_crtc *crtc;
> >  	struct drm_crtc_state *crtc_state;
> > +	struct drm_out_fence_state *fence_state = NULL;
> >  	unsigned plane_mask;
> >  	int ret = 0;
> >  	unsigned int i, j;
> > @@ -1605,9 +1734,13 @@ int drm_mode_atomic_ioctl(struct drm_device *dev,
> >  			!dev->mode_config.async_page_flip)
> >  		return -EINVAL;
> >  
> > +	if ((arg->flags & DRM_MODE_ATOMIC_OUT_FENCE) && !arg->count_out_fences)
> > +		return -EINVAL;
>
> We need testcases which check that arg->count_out_fences and
> arg->out_fences are 0 when the OUT_FENCE flag is not set.
>
> Definitely needs an igt testcase for this invalid input case. Ofc we also
> need tests that give the kernel nonsense in count_out_fences and out_fences
> with the flag set.
>
> > +
> >  	/* can't test and expect an event at the same time. */
> >  	if ((arg->flags & DRM_MODE_ATOMIC_TEST_ONLY) &&
> > -	    (arg->flags & DRM_MODE_PAGE_FLIP_EVENT))
> > +	    (arg->flags & (DRM_MODE_PAGE_FLIP_EVENT
> > +			   | DRM_MODE_ATOMIC_OUT_FENCE)))
>
> If you go with my suggestion above to create the event if either is set,
> maybe a DRM_MODE_ATOMIC_EVENT_MASK with both? Would read easier.

Ok.

Gustavo
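Daniel's two suggestions combined would look roughly like this in
drm_mode_atomic_ioctl(). The mask name is his proposal rather than
existing uapi, and DRM_MODE_ATOMIC_OUT_FENCE comes from this series:

#define DRM_MODE_ATOMIC_EVENT_MASK	(DRM_MODE_PAGE_FLIP_EVENT | \
					 DRM_MODE_ATOMIC_OUT_FENCE)

	/* can't test and expect an event (or out-fence) at the same time */
	if ((arg->flags & DRM_MODE_ATOMIC_TEST_ONLY) &&
	    (arg->flags & DRM_MODE_ATOMIC_EVENT_MASK))
		return -EINVAL;

	/* ...and later, create the vblank event if either flag asks for one */
	if (arg->flags & DRM_MODE_ATOMIC_EVENT_MASK)
		crtc_state->event = create_vblank_event(dev, NULL, fence,
							arg->user_data);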

Re: [RFC v2 5/8] drm/fence: add in-fences support

Ville Syrjälä
In reply to this post by Gustavo Padovan
On Thu, Apr 28, 2016 at 11:36:44AM -0300, Gustavo Padovan wrote:

> 2016-04-27 Daniel Stone <[hidden email]>:
>
> > Hi,
> >
> > On 26 April 2016 at 21:48, Greg Hackmann <[hidden email]> wrote:
> > > On 04/26/2016 01:05 PM, Daniel Vetter wrote:
> > >> On Tue, Apr 26, 2016 at 09:55:06PM +0300, Ville Syrjälä wrote:
> > >>> What are they doing that they can't stuff the fences into an array
> > >>> instead of props?
> > >>
> > >> The hw composer interface is one in-fence per plane. That's really the
> > >> major reason why the kernel interface is built to match. And I really
> > >> don't think we should diverge just because we have a slightly different
> > >> color preference ;-)
> > >
> > > The relationship between layers and fences is only fuzzy and indirect
> > > though.  The relationship is really between the buffer you're displaying on
> > > that layer, and the fence representing the work done to render into that
> > > buffer.  SurfaceFlinger just happens to bundle them together inside the same
> > > struct hwc_layer_1 as an API convenience.
> >
> > Right, and when using implicit fencing, this comes as a plane
> > property, by virtue of plane -> fb -> buffer -> fence.
> >
> > > Which is kind of splitting hairs as long as you have a 1-to-1 relationship
> > > between layers and DRM planes.  But that's not always the case.
> >
> > Can you please elaborate?
> >
> > > A (per-CRTC?) array of fences would be more flexible.  And even in the cases
> > > where you could make a 1-to-1 mapping between planes and fences, it's not
> > > that much more work for userspace to assemble those fences into an array
> > > anyway.
> >
> > As Ville says, I don't want to go down the path of scheduling CRTC
> > updates separately, because that breaks MST pretty badly. If you don't
> > want your updates to display atomically, then don't schedule them
> > atomically ... ? That's the only reason I can see for making fencing
> > per-CRTC, rather than just a pile of unassociated fences appended to
> > the request. Per-CRTC fences also force userspace to merge fences
> > before submission when using multiple planes per CRTC, which is pretty
> > punitive.
> >
> > I think having it semantically attached to the plane is a little bit
> > nicer for tracing (why was this request delayed? -> a fence -> which
> > buffer was that fence for?) at a glance. Also the 'pile of appended
> > fences' model is a bit awkward for more generic userspace, which
> > creates a libdrm request and builds it (add a plane, try it out, wind
> > back) incrementally. Using properties makes that really easy, but
> > without properties, we'd have to add separate codepaths - and thus
> > separate ABI, which complicates distribution - to libdrm to account
> > for a separate plane array which shares a cursor with the properties.
> > So for that reason if none other, I'd really prefer not to go down
> > that route.
>
> I also agree to have it as a FENCE_FD prop on the plane. Summarizing the
> arguments on this thread, they are:

Your "summary" forgot to include any counter arguments.

>
>  - implicit fencing also needs one fence per plane/fb, so it will be good to
>    match with that.

We would actually need a fence per object rather than per fb.

>  - requires userspace to always merge fences

"doesn't?" but that's not true if it's an array. It would be true if
you had just one fence for the whole thing, or one per crtc.

>  - can use standard plane properties, making kernel and userspace life easier;
>    an array brings more work to build the atomic request plus extra checks
>    in the kernel.

I don't really get this one. The objects and props are arrays too. Why is
another array so problematic?

>  - does not need changes to drivers
>  - better for tracing, can identify the buffer/fence promptly

Can fences be reused somehow while still attached to a plane, or ever?
That might cause some oddness if you, say, leave a fence attached to one
plane and then do a modeset on another crtc which perhaps needs to turn
the first crtc off+on to reconfigure something.

--
Ville Syrjälä
Intel OTC

Re: [RFC v2 5/8] drm/fence: add in-fences support

Daniel Vetter
On Thu, Apr 28, 2016 at 6:56 PM, Ville Syrjälä
<[hidden email]> wrote:
>>  - better for tracing, can identify the buffer/fence promptly
>
> Can fences be reused somehow while still attached to a plane, or ever?
> That might cause some oddness if you, say, leave a fence attached to one
> plane and then do a modeset on another crtc which perhaps needs to turn
> the first crtc off+on to reconfigure something.

Fences auto-disappear of course and don't stick around when you
duplicate the drm_plane_state again. I still don't really get the real
concerns though ... In the end it's purely a transport question, and
both ABI ideas work out semantically exactly the same in the end. It's
just that at least in my opinion FENCE_FD prop is a lot more
convenient.
-Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch

Re: [RFC v2 5/8] drm/fence: add in-fences support

Ville Syrjälä
On Thu, Apr 28, 2016 at 07:43:16PM +0200, Daniel Vetter wrote:

> On Thu, Apr 28, 2016 at 6:56 PM, Ville Syrjälä
> <[hidden email]> wrote:
> >>  - better for tracing, can identify the buffer/fence promptly
> >
> > Can fences be reused somehow while still attached to a plane, or ever?
> > That might cause some oddness if you, say, leave a fence attached to one
> > plane and then do a modeset on another crtc which perhaps needs to turn
> > the first crtc off+on to reconfigure something.
>
> Fences auto-disappear of course and don't stick around when you
> duplicate the drm_plane_state again. I still don't really get the real
> concerns though ...

Properties that magically change values shouldn't exist IMO. I guess if
you could have write-only properties or something it might be sensible?

> In the end it's purely a transport question, and
> both ABI ideas work out semantically exactly the same in the end. It's
> just that at least in my opinion FENCE_FD prop is a lot more
> convenient.
> -Daniel
> --
> Daniel Vetter
> Software Engineer, Intel Corporation
> +41 (0) 79 365 57 48 - http://blog.ffwll.ch

--
Ville Syrjälä
Intel OTC

Re: [RFC v2 5/8] drm/fence: add in-fences support

Gustavo Padovan
2016-04-28 Ville Syrjälä <[hidden email]>:

> On Thu, Apr 28, 2016 at 07:43:16PM +0200, Daniel Vetter wrote:
> > On Thu, Apr 28, 2016 at 6:56 PM, Ville Syrjälä
> > <[hidden email]> wrote:
> > >>  - better for tracing, can identify the buffer/fence promptly
> > >
> > > Can fences be reused somehow while still attached to a plane, or ever?
> > > That might cause some oddness if you, say, leave a fence attached to one
> > > plane and then do a modeset on another crtc which perhaps needs to turn
> > > the first crtc off+on to reconfigure something.
> >
> > Fences auto-disappear of course and don't stick around when you
> > duplicate the drm_plane_state again. I still don't really get the real
> > concerns though ...
>
> Properties that magically change values shouldn't exist IMO. I guess if
> you could have write-only properties or something it might be sensible?

We can just not return FENCE_FD on get_props, that would make it
write-only.

        Gustavo
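A sketch of that idea in the plane's get_property path; the property
pointer name here is assumed, not taken from the series:

static int drm_atomic_plane_get_property(struct drm_plane *plane,
		const struct drm_plane_state *state,
		struct drm_property *property, uint64_t *val)
{
	struct drm_mode_config *config = &plane->dev->mode_config;

	if (property == config->prop_fence_fd) {
		/* Write-only: never report a previously set fd back. */
		*val = -1;
		return 0;
	}

	/* ... handle the other plane properties ... */
	return -EINVAL;
}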