RE: [Xen-ia64-devel] [PATCH] Add event callback for xen/ia64
Following is the data I collected, which is similar to yours. I
have domU configured with 512M of memory; the absolute values
below differ due to hardware differences, but the relative
comparison is similar:
[By wget, 3 experiments]
Before applying event patch:
Dom0 wget: 10.33MB/s
DomU wget: 3.85MB/s
Both wget: 8.30MB/s dom0, 1.25MB/s domU
After applying event patch:
Dom0 wget: 10.27MB/s
DomU wget: 0.8MB/s
Both wget: 9.78MB/s dom0, 0.9MB/s domU
As you observed:
- A single domU wget is worse with the event patch, but improves a
bit when both wgets run.
- With both wgets, dom0 is affected more without the event patch.
The only difference is that domU also decreased a lot with both wgets,
even without the event patch.
In a word, there's no easy way to tune performance as long as
interrupts and events are strictly split into two priority groups, as I
mentioned in my last mail: you always lose one side. However, once the final
event layer is ready (which should be soon), we can do more accurate tuning
across the whole system.
Then... could you consider pulling this event patch in?
>[mailto:xen-ia64-devel-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of
>Sent: April 12, 2006 13:59
>To: Alex Williamson
>Cc: Tristan Gingold; xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
>Subject: RE: [Xen-ia64-devel] [PATCH] Add event callback for xen/ia64
>>From: Alex Williamson [mailto:alex.williamson@xxxxxx]
>>Sent: April 12, 2006 13:24
>>> It appears dom0 is not affected and has good throughput. An
>>> interesting observation though: if I wget from dom0 and domU at the
>>> same time, dom0 is a little slower (~8.2MB/s), but the domU throughput
>>> actually increases while the wget in dom0 is running. According to
>>> nload, domU's throughput nearly doubles for the brief time dom0's
>>> is running. Even wget'ing a different file off the server with dom0
>>> causes domU's wget to speed up. Perhaps an indication that your
>>> hunch is correct? Thanks,
>> In contrast, w/o the event channel patch, domU is unaffected by a
>>concurrent wget in dom0 (fairly constant ~10.5MB/s). However, dom0
>>runs at around 9MB/s while domU's transfer is running.
>This may be explained as follows:
>Before applying my patch, all inter-domain events were bound to
>vector 233, which is higher than any external device interrupt.
>In this case, a vnif request is always handled immediately without
>delay, while a real NIC interrupt is serviced at lower priority.
>This way, the vnif path is well served, but dom0's normal eth0
>driver is heavily affected by the virtual drivers (vbd/vnif).
>Thus dom0's network throughput decreases.
>After applying my patch, all inter-domain events are placed
>lower than all interrupt vectors. So, in contrast, the real NIC traffic
>can be handled immediately, while requests from the virtual drivers
>are delayed/pended a lot by any interrupts. So domU network
>throughput decreases.
>Actually this is why the event channel mechanism was proposed by
>xen/x86, and we're now moving ia64 in this direction too.
>The current patch I sent out is only an intermediate step, which has
>two separate paths for events and interrupts. Eventually there'll be
>only one event path, and at that point we can apply more fine-grained
>control to the priority of each event.
>So, do you think this patch is ready for check-in now?
>We can do more accurate tuning after the whole model is ready
>and the p2m patch is merged in.
>Anyway, I'm still taking some measurements now. I could provide
>a version matching the current behavior if you like, for example by
>increasing the event path priority, though that may not be worth
>doing since it's only a temporary solution. :-)
Xen-ia64-devel mailing list