Re: [Xen-devel] [PATCH 01/10] xen/arm: vpl011: Add pl011 uart emulation in Xen
On Tue, 25 Apr 2017, Bhupinder Thakur wrote:
> On 20 April 2017 at 00:10, Stefano Stabellini <sstabellini@xxxxxxxxxx> wrote:
> > We don't have a formal protocol spec for PV console, but if we had, it
> > would say that the frontend (Xen in this case) should send notifications
> > every time there is new data to write. Thus, once per character in this
> > case.
> >
> > I would start with a dumb implementation like that, simply notify the
> > other end every time. Then, in a separate patch, we can consider
> > introducing optimizations, which need to be well explained.
> >
> Ok. I will add this optimisation as a separate patch later.
>
> > Regarding the optimization you introduced in this patch, delaying write
> > notifications until we receive a notification from xenconsoled, how many
> > notifications from xen to xenconsoled does it actually save? xenconsoled
> > is going to send a notification for every read: we might end up sending
> > the same number of notifications, only delayed.
> >
> In the PV console design, the events from the guest are sent to
> xenconsole for every chunk of data. Since in the pl011 emulation the
> data comes in bytes only, it would generate a lot of events to
> xenconsole. To reduce the flurry of events, this optimisation was
> added.
>
> Xenconsole sends an event in the following conditions:
>
> 1. There is data available for Xen to process
> 2. It has finished processing the data and Xen can send more data
>
> In the 2nd case, xenconsole will keep reading the data from the ring
> buffer until it goes empty. At that point, it would send an event to
> Xen. Between the sending of this event and its processing by Xen,
> there could be more data added for xenconsole to process. While
> handling an event, Xen will check for that condition and, if there is
> data to be processed by xenconsole, it will send an event.
>
> Also, sending delayed events helps with the rate limit check in
> xenconsole. If there are too many events, they may be masked by
> xenconsole. I could test whether this rate limit check is really
> getting hit with and without this optimisation.

I understand the idea behind it; my question is whether this approach was
actually verified by any scientific measurements. Did you run a test to
count how many notifications were skipped thanks to this optimization? If
so, what was the scenario? How many notifications were saved? If you
didn't run a test, I suggest you do :-)
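For instance, a couple of counters around the place where the emulator
decides whether to raise the event would already give us numbers. Something
along these lines would do (rough sketch for measurement only; the function
and parameter names below are made up for illustration, they are not the
ones in your patch):

#include <xen/event.h>
#include <xen/lib.h>
#include <xen/sched.h>

/*
 * Rough sketch, for measurement only: count how many events we actually
 * send to xenconsoled versus how many the optimisation skips.  All names
 * here are illustrative, not taken from the patch.
 */
static void vpl011_notify_xenconsoled(struct domain *d, evtchn_port_t port,
                                      bool skip)
{
    static unsigned long sent, skipped;

    if ( skip )
        skipped++;
    else
    {
        sent++;
        notify_via_xen_event_channel(d, port);
    }

    /* Dump the counters every so often to get a rough ratio. */
    if ( !((sent + skipped) % 1024) )
        printk("vpl011: d%d sent %lu event(s), skipped %lu\n",
               d->domain_id, sent, skipped);
}

Even a simple test that scrolls output on the guest console, run with and
without the optimisation, would tell us whether the saving justifies the
extra complexity.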
> >> Since this code is under LOCK, the IN and OUT ring buffers will not be
> >> updated by the guest. Specifically, the following transitions are
> >> ruled out:
> >>
> >> IN ring buffer
> >> non-empty ----> empty (as the reader is blocked due to lock)
> >>
> >> OUT ring buffer
> >> not-full ----> full (as the writer is blocked due to lock).
> >>
> >> So the code inside the IF block remains valid even if the buffer state
> >> changes. The IN ring buffer can go from non-empty to full. Similarly,
> >> the OUT ring buffer can go from full to empty.
> >>
> >> Also, checking the latest buffer index (instead of checking the buffer
> >> index read into local variables) allows the pl011 state to be updated
> >> at the earliest opportunity.
> >
> > I understand that the IN state shouldn't change. However, like you say,
> > the OUT state can change between the VPL011_OUT_RING_FULL and the
> > VPL011_OUT_RING_EMPTY checks.
> >
> > It is a bad idea to try to write lockless code against a potentially
> > changing state. It is already hard enough without adding that
> > complexity; the other end can find ways to cause problems. Please take
> > a snapshot of the out indexes and use the local variables, rather than
> > "intf", to do the checks.
> >
> I will use local variables to do these checks.
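Thanks. To be concrete, what I have in mind is roughly the following (just
a sketch of the snapshot pattern against struct xencons_interface; the
vpl011 bits around it, including the helper name and the tx_empty/tx_full
names, are only illustrative):

#include <asm/system.h>
#include <public/io/console.h>

/*
 * Sketch of the snapshot pattern: read the OUT indexes from the shared
 * page once, then do every check on the local copies, never on "intf"
 * again.  How the results feed back into the emulated registers is left
 * out here.
 */
static void vpl011_check_out_ring(struct xencons_interface *intf)
{
    XENCONS_RING_IDX out_cons, out_prod, fill;
    bool tx_empty, tx_full;

    out_cons = intf->out_cons;
    out_prod = intf->out_prod;

    /* Order the index reads before any decision based on them. */
    smp_rmb();

    fill = out_prod - out_cons;

    /*
     * Both conditions are evaluated against the same snapshot, so the
     * backend moving out_cons under our feet cannot make them disagree.
     */
    tx_empty = (fill == 0);
    tx_full  = (fill == sizeof(intf->out));

    /* ... update the emulated TXFE/TXFF bits from tx_empty/tx_full ... */
    (void)tx_empty;
    (void)tx_full;
}

Whether the helper takes "intf" or the vpl011 struct doesn't matter; the
point is simply that nothing after the snapshot reads intf->out_* again.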
>
> Regards,
> Bhupinder

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel