Re: [Xen-devel] [PATCH 1/1] Xen ARINC 653 Scheduler (updated to add support for CPU pools)
Oh, I see. Well, the cause is that common/schedule.c:sched_adjust_global()
is broken. But what should it actually do, given that multiple schedulers of
the same or differing types may now exist in a system? Perhaps the sysctl
should take a cpupool id, to uniquely identify the scheduler instance to be
adjusted?

 -- Keir

On 16/06/2010 17:40, "Kathy Hadley" <Kathy.Hadley@xxxxxxxxxxxxxxx> wrote:

> Keir, George, et al.,
>   I definitely saw two "ops" values. When the .init function was called,
> ops = 0xFF213DC0; I then used xmalloc() to allocate memory for the
> scheduler data structure and set ops->sched_data equal to the address of
> that memory block (similar to what is done in csched_init() in
> sched_credit.c). When the .adjust_global function was called, ops =
> 0xFF2112D0 and ops->sched_data was not equal to the address of the memory
> block allocated in the .init function (it was equal to the value set when
> "sched_arinc653_def" was declared).
>
> Regards,
>   Kathy
>
>> -----Original Message-----
>> From: Keir Fraser [mailto:keir.fraser@xxxxxxxxxxxxx]
>> Sent: Wednesday, June 16, 2010 12:32 PM
>> To: Kathy Hadley; George Dunlap
>> Cc: xen-devel@xxxxxxxxxxxxxxxxxxx; Juergen Gross
>> Subject: Re: [Xen-devel] [PATCH 1/1] Xen ARINC 653 Scheduler (updated
>> to add support for CPU pools)
>>
>> On 16/06/2010 17:25, "Kathy Hadley" <Kathy.Hadley@xxxxxxxxxxxxxxx>
>> wrote:
>>
>>> Keir,
>>>   I only saw the .init function called once. I downloaded xen-unstable
>>> on May 27. Were your updates after that?
>>
>> My changes were done before May 27, and that ties in with you seeing
>> .init called only once. That being the case, you should not see multiple
>> different ops structures ('struct scheduler' instances). The only ops
>> struct that should exist in the system in this case is the one
>> statically defined near the top of common/schedule.c.
>>
>> -- Keir
>>
>>> Thanks,
>>> Kathy Hadley
>>>
>>>> -----Original Message-----
>>>> From: Keir Fraser [mailto:keir.fraser@xxxxxxxxxxxxx]
>>>> Sent: Wednesday, June 16, 2010 12:20 PM
>>>> To: George Dunlap; Kathy Hadley
>>>> Cc: xen-devel@xxxxxxxxxxxxxxxxxxx; Juergen Gross
>>>> Subject: Re: [Xen-devel] [PATCH 1/1] Xen ARINC 653 Scheduler (updated
>>>> to add support for CPU pools)
>>>>
>>>> On 16/06/2010 17:14, "George Dunlap" <George.Dunlap@xxxxxxxxxxxxx>
>>>> wrote:
>>>>
>>>>>> I actually tried the xmalloc() method first. I found that when the
>>>>>> .adjust_global function was called, the address of the "ops" data
>>>>>> structure passed to that function was different from the address of
>>>>>> the "ops" data structure when the .init function was called. I
>>>>>> wanted to use .adjust_global to modify the data structure that was
>>>>>> created when the .init function was called, but I could not figure
>>>>>> out a way to get the address of the second data structure.
>>>>>> Suggestions?
>>>>>
>>>>> It's been a month or two since I trawled through the cpupools code,
>>>>> but I seem to recall that .init is called twice -- once for the
>>>>> "default pool" (cpupool0), and once for an actually in-use pool.
>>>>> (Juergen, can you correct me if I'm wrong?) Is it possible that
>>>>> that's the difference in the pointers you're seeing?
>>>>
>>>> Oh yes, that was the old behaviour. I took a hatchet to the
>>>> scheduler/cpupool interfaces a few weeks ago, and now we should only
>>>> initialise the scheduler once, unless extra cpupools are manually
>>>> created.
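The pattern Kathy describes -- the .init hook xmalloc()ing a private
structure and parking it in ops->sched_data, as csched_init() does in
sched_credit.c -- looks roughly like the sketch below. The struct layouts
and the a653sched_* names are simplified stand-ins so the example compiles
on its own; in Xen the real definitions come from the scheduler headers and
the allocator is xmalloc(), not malloc().

#include <stdlib.h>
#include <string.h>

struct scheduler {                /* stand-in for Xen's struct scheduler  */
    void *sched_data;             /* per-instance private data            */
};

struct a653sched_private {        /* hypothetical ARINC 653 private state */
    unsigned int num_schedule_entries;
    /* ... schedule table, run queue, locks ...                           */
};

static int a653sched_init(struct scheduler *ops)
{
    /* In Xen this would be xmalloc(struct a653sched_private). */
    struct a653sched_private *prv = malloc(sizeof(*prv));

    if ( prv == NULL )
        return -1;                /* -ENOMEM in Xen */

    memset(prv, 0, sizeof(*prv));
    ops->sched_data = prv;        /* later hooks read it back from ops */
    return 0;
}

Every later hook recovers its private state from the ops pointer it is
handed, which is why receiving a second, different ops instance in
.adjust_global breaks the scheme.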
>>>> The fact that Kathy is seeing two different ops structures probably
>>>> indicates that her xen-unstable tree is very out of date, which may
>>>> also mean that the patch will not apply to the current tip.
>>>>
>>>> -- Keir
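Keir's suggestion at the top of the thread -- having the adjust-global
sysctl carry a cpupool id so it can single out one scheduler instance --
might look roughly like the following. The cpupool structure, the lookup
helper, and the adjust_global hook signature are illustrative assumptions,
not Xen's actual interfaces.

#include <stddef.h>

struct scheduler {
    void *sched_data;             /* private data set up by .init */
    int (*adjust_global)(struct scheduler *ops, void *params);
};

struct cpupool {                  /* hypothetical cpupool descriptor */
    unsigned int      cpupool_id;
    struct scheduler *sched;      /* this pool's scheduler instance  */
    struct cpupool   *next;
};

static struct cpupool *cpupool_list;   /* assumed global list of pools */

static struct cpupool *cpupool_find(unsigned int id)
{
    struct cpupool *c;

    for ( c = cpupool_list; c != NULL; c = c->next )
        if ( c->cpupool_id == id )
            return c;

    return NULL;
}

/* sched_adjust_global(), reworked so the sysctl names the pool whose
 * scheduler instance should be adjusted. */
static int sched_adjust_global(unsigned int cpupool_id, void *params)
{
    struct cpupool *c = cpupool_find(cpupool_id);

    if ( c == NULL )
        return -1;                /* -ESRCH in Xen */

    /* Dispatch to the ops that this pool's .init populated, so
     * ops->sched_data points at the block allocated there. */
    return c->sched->adjust_global(c->sched, params);
}

With the pool id in the sysctl, .adjust_global is always handed the same
ops that .init populated, which is the behaviour Kathy was expecting.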