
Re: [Xen-devel] [PATCH 3/3] x86/HPET: cache MSI message last written


  • To: Jan Beulich <JBeulich@xxxxxxxx>
  • From: Keir Fraser <keir.xen@xxxxxxxxx>
  • Date: Thu, 18 Oct 2012 17:42:00 +0100
  • Cc: xen-devel <xen-devel@xxxxxxxxxxxxx>
  • Delivery-date: Thu, 18 Oct 2012 16:42:29 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xen.org>
  • Thread-index: Ac2tT36ACrPQ3CGQ2kKh5v++DW6VrQ==
  • Thread-topic: [Xen-devel] [PATCH 3/3] x86/HPET: cache MSI message last written

On 18/10/2012 11:39, "Jan Beulich" <JBeulich@xxxxxxxx> wrote:

>>>> On 18.10.12 at 10:22, Keir Fraser <keir@xxxxxxx> wrote:
>> On 16/10/2012 16:11, "Jan Beulich" <JBeulich@xxxxxxxx> wrote:
>> 
>>> Rather than spending measurable amounts of time reading back the most
>>> recently written message, cache it in space previously unused, and thus
>>> accelerate the CPU's entering of the intended C-state.
>>> 
>>> hpet_msi_read() ends up being unused after this change, but rather than
>>> removing the function, it's being marked "unused"; that way it can
>>> easily get used again should a new need for it arise.
>> 
>> Please use __attribute_used__
> 
> That wouldn't be correct: the function _is_ unused (and there would
> be no issue if it were used, afaik), and the __used__ attribute ought
> to tell the compiler to keep the function around despite it not
> having any callers visible to it.

Perhaps our __attribute_used__ definition should change, then?

 -- Keir
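
(For reference, and not part of either mail: a minimal sketch of the distinction Jan describes, assuming GCC's documented semantics for the two attributes.)

    /* "unused": the function is possibly unused; the compiler suppresses
     * -Wunused-function for it, but may still discard the code. */
    static void __attribute__((__unused__)) currently_unused_helper(void)
    {
    }

    /* "used": code must be emitted for the function even though the
     * compiler sees no caller, e.g. when it is referenced only from
     * inline assembly or picked up by the linker. */
    static void __attribute__((__used__)) referenced_elsewhere(void)
    {
    }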

> Jan
> 
>>> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
>> 
>> Acked-by: Keir Fraser <keir@xxxxxxx>
>> 
>>> --- a/xen/arch/x86/hpet.c
>>> +++ b/xen/arch/x86/hpet.c
>>> @@ -253,17 +253,19 @@ static void hpet_msi_mask(struct irq_des
>>>  
>>>  static void hpet_msi_write(struct hpet_event_channel *ch, struct msi_msg *msg)
>>>  {
>>> +    ch->msi.msg = *msg;
>>>      if ( iommu_intremap )
>>>          iommu_update_ire_from_msi(&ch->msi, msg);
>>>      hpet_write32(msg->data, HPET_Tn_ROUTE(ch->idx));
>>>      hpet_write32(msg->address_lo, HPET_Tn_ROUTE(ch->idx) + 4);
>>>  }
>>>  
>>> -static void hpet_msi_read(struct hpet_event_channel *ch, struct msi_msg *msg)
>>> +static void __attribute__((__unused__))
>>> +hpet_msi_read(struct hpet_event_channel *ch, struct msi_msg *msg)
>>>  {
>>>      msg->data = hpet_read32(HPET_Tn_ROUTE(ch->idx));
>>>      msg->address_lo = hpet_read32(HPET_Tn_ROUTE(ch->idx) + 4);
>>> -    msg->address_hi = 0;
>>> +    msg->address_hi = MSI_ADDR_BASE_HI;
>>>      if ( iommu_intremap )
>>>          iommu_read_msi_from_ire(&ch->msi, msg);
>>>  }
>>> @@ -285,20 +287,19 @@ static void hpet_msi_ack(struct irq_desc
>>>  
>>>  static void hpet_msi_set_affinity(struct irq_desc *desc, const cpumask_t *mask)
>>>  {
>>> -    struct msi_msg msg;
>>> -    unsigned int dest;
>>> +    struct hpet_event_channel *ch = desc->action->dev_id;
>>> +    struct msi_msg msg = ch->msi.msg;
>>>  
>>> -    dest = set_desc_affinity(desc, mask);
>>> -    if (dest == BAD_APICID)
>>> +    msg.dest32 = set_desc_affinity(desc, mask);
>>> +    if ( msg.dest32 == BAD_APICID )
>>>          return;
>>>  
>>> -    hpet_msi_read(desc->action->dev_id, &msg);
>>>      msg.data &= ~MSI_DATA_VECTOR_MASK;
>>>      msg.data |= MSI_DATA_VECTOR(desc->arch.vector);
>>>      msg.address_lo &= ~MSI_ADDR_DEST_ID_MASK;
>>> -    msg.address_lo |= MSI_ADDR_DEST_ID(dest);
>>> -    msg.dest32 = dest;
>>> -    hpet_msi_write(desc->action->dev_id, &msg);
>>> +    msg.address_lo |= MSI_ADDR_DEST_ID(msg.dest32);
>>> +    if ( msg.data != ch->msi.msg.data || msg.dest32 != ch->msi.msg.dest32 )
>>> +        hpet_msi_write(ch, &msg);
>>>  }
>>>  
>>>  /*
>>> 
>>> 
>>> 
>>> _______________________________________________
>>> Xen-devel mailing list
>>> Xen-devel@xxxxxxxxxxxxx
>>> http://lists.xen.org/xen-devel
> 
> 
> 
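
(Also not part of the thread: a minimal, self-contained sketch of the shadow-copy pattern the commit description relies on; dev_channel, dev_write32 and fake_route_reg are hypothetical stand-ins, while the real patch caches the full struct msi_msg in ch->msi.msg instead of re-reading the HPET_Tn_ROUTE registers.)

    #include <stdint.h>

    /* Hypothetical stand-in for a slow, uncached MMIO register. */
    static volatile uint32_t fake_route_reg;

    static void dev_write32(uint32_t val)
    {
        fake_route_reg = val;       /* models hpet_write32() */
    }

    struct dev_channel {
        uint32_t shadow_data;       /* last value written to the register */
    };

    static void dev_msg_write(struct dev_channel *ch, uint32_t data)
    {
        ch->shadow_data = data;     /* cache the value first ... */
        dev_write32(data);          /* ... then program the hardware */
    }

    static uint32_t dev_msg_current(const struct dev_channel *ch)
    {
        /* No read back from the device: software is the only writer of
         * this register, so the cached copy is authoritative and far
         * cheaper to access than an uncached MMIO read. */
        return ch->shadow_data;
    }

Avoiding the read-back matters on the idle path because an uncached MMIO read can stall the CPU for a measurable time, which is what the description's C-state remark refers to.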



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

