
Re: [Xen-devel] [PATCH v1 2/2] xen/kbdif: add multi-touch support



On 01/23/2017 09:49 PM, Stefano Stabellini wrote:
On Sat, 21 Jan 2017, Oleksandr Andrushchenko wrote:
In the mail thread you mentioned above there is a picture of
the xenstore entries and the following conclusions:

1. No change to the existing kbd+ptr:
1.1. They still share a dedicated (single) connection
channel as they did before multi-touch: page-ref + event-channel:

/local/domain/2/device/vkbd/0/page-ref = "1248025"
/local/domain/2/device/vkbd/0/page-gref = "336"
/local/domain/2/device/vkbd/0/event-channel = "43"

1.2. No multi-touch events via that connection
1.3. Old backends/frontends will see no difference
after multi-touch

2. Each multi-touch device has its own:
2.1. Configuration (width x height, num-contacts)
2.2. Own connection channel (page-ref, event-channel)

/local/domain/2/device/vkbd/0/mtouch/0/width = "3200"
/local/domain/2/device/vkbd/0/mtouch/0/height = "3200"
/local/domain/2/device/vkbd/0/mtouch/0/num-contacts = "5"
/local/domain/2/device/vkbd/0/mtouch/0/page-ref = "1252386"
/local/domain/2/device/vkbd/0/mtouch/0/page-gref = "335"
/local/domain/2/device/vkbd/0/mtouch/0/event-channel = "44"

/local/domain/2/device/vkbd/0/mtouch/1/width = "6400"
/local/domain/2/device/vkbd/0/mtouch/1/height = "6400"
/local/domain/2/device/vkbd/0/mtouch/1/num-contacts = "10"
/local/domain/2/device/vkbd/0/mtouch/1/page-ref = "2376038"
/local/domain/2/device/vkbd/0/mtouch/1/page-gref = "337"
/local/domain/2/device/vkbd/0/mtouch/1/event-channel = "45"
So, for the example above, 1 kbd + 1 ptr + 2 mt devices
within a single driver means *3 connections*.
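
(Illustration only, not from the patch: a minimal sketch of how a frontend
might publish one of the .../mtouch/<i> subtrees shown above.
xenbus_printf() and struct xenbus_device are the real Linux xenbus API;
the helper name and its parameters are mine, and a real frontend would
wrap the writes in a xenbus transaction.)

#include <linux/kernel.h>     /* snprintf()                            */
#include <xen/xenbus.h>       /* xenbus_printf(), struct xenbus_device */
#include <xen/grant_table.h>  /* grant_ref_t                           */

static int write_mtouch_nodes(struct xenbus_device *dev, int i,
                              int width, int height, int num_contacts,
                              unsigned long page_ref, grant_ref_t gref,
                              unsigned int evtchn)
{
        char dir[64];
        int ret;

        /* the mtouch/<i> directory under the frontend's own xenstore node */
        snprintf(dir, sizeof(dir), "%s/mtouch/%d", dev->nodename, i);

        ret = xenbus_printf(XBT_NIL, dir, "width", "%d", width);
        if (!ret)
                ret = xenbus_printf(XBT_NIL, dir, "height", "%d", height);
        if (!ret)
                ret = xenbus_printf(XBT_NIL, dir, "num-contacts", "%d",
                                    num_contacts);
        if (!ret)
                ret = xenbus_printf(XBT_NIL, dir, "page-ref", "%lu", page_ref);
        if (!ret)
                ret = xenbus_printf(XBT_NIL, dir, "page-gref", "%u", gref);
        if (!ret)
                ret = xenbus_printf(XBT_NIL, dir, "event-channel", "%u",
                                    evtchn);
        return ret;
}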

There are no ring-ref and event-channel entries under mtouch, are there?
There are ring-refs and event-channels under mtouch, one set
*per multi-touch device*.
Now it is clear. Sorry it took so much back and forth.
np
page-ref and event-channel should definitely be described under
"Multi-touch Device Parameters". When I asked you to remove them, I
meant not just from the description but also from the protocol. This is
where the confusion originated: in the current patch the two parameters
are not described, hence I assumed they didn't exist and you were
reusing the existing ring (/local/domain/2/device/vkbd/0/page-ref).
understood

Aside from new message structs, which look fine to me, we have two
important choices to make:

a) number of multitouch devices per PV frontend/backend connection
(By frontend/backend connection, I mean a xenstore device,
such as /local/domain/2/device/vkbd/0.)

b) number of multitouch devices sharing the same ring
(By ring I mean page-ref + event channel.)

Both these choices need to be motivated. As I wrote in the past, I would
go for 1 multitouch device per frontend/backend connection, reusing the
existing ring (/local/domain/2/device/vkbd/0/page-ref and
/local/domain/2/device/vkbd/0/event-channel), because I assume that
there won't be many multitouch devices, fewer than 5,
yes, I have 2 mt devices in my use-case
so we can afford to
create new PV devices for each of them.
see at the bottom
  I am also assuming that it is
fine for them to share a ring with the keyboard or pointer devices without
any effect on performance, because they should all be low-frequency devices.
sounds reasonable, but see below
The current proposal supports multiple multitouch devices per
frontend/backend connection and allocates a new dedicated ring for each
multitouch device. Your proposal might very well be the best solution,
but why? The architectural choices need to be motivated. How many
multitouch devices is it reasonable to expect on a platform?
What multitouch event frequency do we expect from them? The answer
to the first question motivates choice a), the answer to the second
question motivates choice b). Please add them both to the protocol
description. It is OK to add the explanation to kbdif.h. You also
mentioned something about multiple PV devices causing trouble for Linux.
If that is an issue somehow, please add info about that too.
Well, all of the above sounds reasonable; some comments:
1. Event frequency: here we are talking about a 60-120(?) Hz
scan frequency of a touch screen generating at least 2
events for each contact: frame (syn) and position.
So, for 10 fingers we can estimate the value to be
60 * 2 * 10 = 1200 events per second per multi-touch device.
Of course, this is a rough estimate which I believe would
be smaller in real life.
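
A quick back-of-envelope sketch of what that estimate means for a shared
ring (standalone C; the ring and event sizes are my assumptions mirroring
kbdif.h, a 2048-byte in ring with 40-byte events):

#include <stdio.h>

int main(void)
{
        const int scan_hz = 60;            /* touch screen scan rate      */
        const int events_per_contact = 2;  /* position + frame (syn)      */
        const int contacts = 10;           /* fingers per mt device       */
        const int mt_devices = 2;          /* mt devices sharing one ring */

        /* assumed XENKBD_IN_RING_SIZE / XENKBD_IN_EVENT_SIZE */
        const int ring_slots = 2048 / 40;  /* ~51 buffered in-events      */

        int ev_per_sec = scan_hz * events_per_contact * contacts * mt_devices;

        printf("%d events/s; ring holds %d events (~%d ms of traffic)\n",
               ev_per_sec, ring_slots, ring_slots * 1000 / ev_per_sec);
        return 0;
}

So even with both of my mt devices on the shared kbd+ptr ring we are
talking about roughly 2400 events/s, and the ring buffers about 20 ms
of traffic, which supports the low-frequency assumption above.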

2. "create new PV devices for each of them"
2.1. Driver
This is what you are seeing in xenstore as
  /local/domain/2/device/vkbd/0
where 0 is the index of the *driver*.
And the challenge here is that if you want multiple *devices*
to be allocated in this way:
  /local/domain/2/device/vkbd/0
  /local/domain/2/device/vkbd/1
  /local/domain/2/device/vkbd/2
then you have to register multiple *DRIVERS*.
What we normally do in the PV frontend driver is call
*xenbus_register_frontend*, passing a *struct xenbus_driver*
with *.ids* set to a single entry, "vkbd".
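
For reference, a minimal sketch of that registration (modeled on what
xen-kbdfront does; the vkbd_front_* names and the empty callbacks are
placeholders of mine):

#include <linux/module.h>
#include <xen/xenbus.h>

/* A real frontend would allocate rings and input devices here. */
static int vkbd_front_probe(struct xenbus_device *dev,
                            const struct xenbus_device_id *id)
{
        return 0;
}

static int vkbd_front_remove(struct xenbus_device *dev)
{
        return 0;
}

/* A single "vkbd" id: one driver, bound to every .../device/vkbd/<i>. */
static const struct xenbus_device_id vkbd_front_ids[] = {
        { "vkbd" },
        { "" }
};

static struct xenbus_driver vkbd_front_driver = {
        .ids    = vkbd_front_ids,
        .probe  = vkbd_front_probe,
        .remove = vkbd_front_remove,
};

static int __init vkbd_front_init(void)
{
        return xenbus_register_frontend(&vkbd_front_driver);
}
module_init(vkbd_front_init);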

I am a little bit confused here about the frontend implementation:
how do we dynamically register multiple "vkbd" *drivers* depending on
the configuration, so that we get the layout you mention?

2.2. Input device
There is virtually no limit on the number of *DEVICES within a DRIVER*,
but they all live under the "/local/domain/2/device/vkbd/0" entry.
That being said, under the same xenstore entry you can allocate
ptr + kbd + N mt input devices.
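
A sketch of what I mean in 2.2 (not actual frontend code: the helper and
its parameters are mine, while the input core calls are the standard
Linux API):

#include <linux/input.h>
#include <linux/input/mt.h>

/*
 * Register one of the N multi-touch input devices; the kbd and ptr
 * input devices are registered the same way from the same xenbus probe,
 * i.e. all of them under .../device/vkbd/0.
 */
static struct input_dev *register_mt_input(struct device *parent,
                                           int width, int height,
                                           int num_contacts)
{
        struct input_dev *mt = input_allocate_device();

        if (!mt)
                return NULL;

        mt->name = "Xen Virtual Multi-touch";
        mt->dev.parent = parent;

        input_set_abs_params(mt, ABS_MT_POSITION_X, 0, width - 1, 0, 0);
        input_set_abs_params(mt, ABS_MT_POSITION_Y, 0, height - 1, 0, 0);

        if (input_mt_init_slots(mt, num_contacts, INPUT_MT_DIRECT) ||
            input_register_device(mt)) {
                input_free_device(mt);
                return NULL;
        }
        return mt;
}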

Bottom line:
================
a) From (1), it would probably be fine to send these events
over the existing ring (kbd+ptr), but the mt event structures
would need an index to indicate which mt device owns the
event (see the sketch after this list)
b) We may still want to have a dedicated ring, either for all
mt devices (depending on the event frequency we consider reasonable)
or per mt device.
c) I see no simple way to support multiple drivers, as compared to
multiple devices (*DISCLAIMER*: the latter is what is natural in Linux).
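
To make (a) concrete, a purely hypothetical in-event layout (this is
*not* what the posted patch defines, just an illustration of the
per-event device index):

#include <stdint.h>

struct xenkbd_mtouch_evt {
        uint8_t  type;        /* would need a new XENKBD_TYPE_* value     */
        uint8_t  event_type;  /* contact down/up/motion or frame (syn)    */
        uint8_t  contact_id;  /* 0 .. num-contacts - 1                    */
        uint8_t  mt_dev_idx;  /* which .../mtouch/<i> device sent it      */
        int32_t  abs_x;       /* absolute position, when applicable       */
        int32_t  abs_y;
};

Field names and sizes are mine; the only point is the mt_dev_idx member
that would let several mt devices share one ring.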

All,
what do you think of the above? Which approach are we going to
take?

Thank you,
Oleksandr
