Question

In Chapter 5, we discussed possible race conditions on various kernel data structures. Most scheduling algorithms maintain a run queue, which lists processes eligible to run on a processor. On multicore systems, there are two general options: (1) each processing core has its own run queue, or (2) a single run queue is shared by all processing cores. What are the advantages and disadvantages of each of these approaches?

Answer #1
  • The main advantage of each processing core having its own run queue is that there is no contention over a single run queue when the scheduler runs simultaneously on two or more processors.
  • When a scheduling decision must be made for a processing core, the scheduler need look no further than that core's private run queue.
  • A disadvantage of a single shared run queue is that it must be protected with locks to avoid a race condition: a processing core may be available to run a thread, yet it must first acquire the lock to retrieve the thread from the shared queue.
  • However, load balancing would likely not be an issue with a single run queue, whereas when each processing core has its own run queue, there must be some form of load balancing between the different run queues (a sketch contrasting the two designs follows below).
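The following is a minimal C sketch, not from the textbook, that contrasts the two designs described above: pick_next_shared() models option (2), where every core must acquire a mutex before dequeuing, while pick_next_local() and steal_from() model option (1), where local dequeues need no lock but an idle core must migrate work from a busier one. All names (struct task, NCORES, the queue layout) are invented for illustration, and the simple linked lists stand in for whatever structure a real scheduler would use.

/*
 * Illustrative sketch only. Shared run queue with a lock vs. lock-free
 * per-core queues that rely on load balancing.
 */
#include <pthread.h>
#include <stdio.h>

#define NCORES 4

struct task {
    int id;
    struct task *next;
};

/* Option (2): one shared run queue -- every core must take the lock. */
struct task *shared_queue = NULL;
pthread_mutex_t shared_lock = PTHREAD_MUTEX_INITIALIZER;

struct task *pick_next_shared(void)
{
    pthread_mutex_lock(&shared_lock);      /* contention point */
    struct task *t = shared_queue;
    if (t)
        shared_queue = t->next;
    pthread_mutex_unlock(&shared_lock);
    return t;
}

/* Option (1): per-core run queues -- no lock needed for the local queue. */
struct task *per_core_queue[NCORES];

struct task *pick_next_local(int core)
{
    struct task *t = per_core_queue[core];
    if (t)
        per_core_queue[core] = t->next;
    return t;                              /* NULL means the queue is empty */
}

/*
 * Per-core queues need load balancing: an idle core pulls work from a busy
 * one. In a real kernel this cross-queue access would itself require
 * synchronization; it is omitted here to keep the contrast simple.
 */
struct task *steal_from(int busy_core, int idle_core)
{
    struct task *t = per_core_queue[busy_core];
    if (t) {
        per_core_queue[busy_core] = t->next;
        t->next = per_core_queue[idle_core];
        per_core_queue[idle_core] = t;
    }
    return t;
}

int main(void)
{
    /* Option (2): dequeue from the shared queue (lock taken inside). */
    struct task t1 = { .id = 7, .next = NULL };
    shared_queue = &t1;
    struct task *s = pick_next_shared();
    printf("shared queue handed out task %d\n", s ? s->id : -1);

    /* Option (1): core 0 has a task; idle core 1 steals and runs it. */
    struct task t0 = { .id = 42, .next = NULL };
    per_core_queue[0] = &t0;
    steal_from(0, 1);
    struct task *next = pick_next_local(1);
    printf("core 1 runs task %d\n", next ? next->id : -1);
    return 0;
}

Compile with a pthread-aware command such as cc sketch.c -lpthread. The point of the sketch is the structural difference: the shared design pays for mutual exclusion on every scheduling decision, while the per-core design schedules locally for free but needs an explicit balancing step.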