What is RTOS? Explain the different types of RTOSes?
Answers
A real-time operating system (RTOS) is an operating system (OS) intended to serve real-time applications that process data as it comes in, typically without buffering delays. Processing time requirements are measured in tenths of seconds or shorter increments of time. A real-time system is a time-bound system with well-defined, fixed time constraints.
Design philosophies-The most common designs are:
Event-driven – switches tasks only when an event of higher priority needs servicing; called preemptive priority, or priority scheduling.
Time-sharing – switches tasks on a regular clocked interrupt, and on events; called round robin.
Scheduling
In typical designs, a task has three states:
1. Running (executing on the CPU)
2. Ready (ready to be executed)
3. Blocked (waiting for an event, I/O for example)
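As a purely illustrative sketch (not the data structure of any particular RTOS; all names here are hypothetical), these three states are often recorded in a small field of each task's control block:

```c
/* Illustrative sketch only: a minimal task control block holding the three
 * task states listed above. Names are hypothetical, not a real RTOS API. */
typedef enum {
    TASK_RUNNING,   /* currently executing on the CPU                     */
    TASK_READY,     /* runnable, waiting to be selected by the scheduler  */
    TASK_BLOCKED    /* waiting for an event, e.g. I/O completion          */
} task_state_t;

typedef struct {
    const char  *name;       /* task name, useful for debugging            */
    unsigned     priority;   /* assumed convention: higher = more urgent   */
    task_state_t state;      /* one of the three states above              */
} task_control_block_t;
```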
Algorithms-Some commonly used RTOS scheduling algorithms are:
Cooperative scheduling
Preemptive scheduling
Rate-monotonic scheduling (sketched below)
Round-robin scheduling
Fixed-priority pre-emptive scheduling, an implementation of preemptive time slicing
Fixed-priority scheduling with deferred preemption
Fixed-priority non-preemptive scheduling
Critical section preemptive scheduling
Static time scheduling
Earliest Deadline First approach
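To make one of these concrete, the sketch below shows the core idea of rate-monotonic scheduling: periodic tasks are given fixed priorities in inverse proportion to their periods, so the task with the shortest period gets the highest priority. The task names and periods are illustrative assumptions, not taken from any real system.

```c
/* Illustrative sketch of rate-monotonic priority assignment:
 * shorter period => higher fixed priority. All names are hypothetical. */
#include <stdio.h>
#include <stdlib.h>

typedef struct {
    const char *name;
    unsigned    period_ms;   /* how often the task must run              */
    unsigned    priority;    /* assigned below; higher = more urgent     */
} periodic_task_t;

static int by_period(const void *a, const void *b)
{
    const periodic_task_t *ta = a, *tb = b;
    return (int)ta->period_ms - (int)tb->period_ms;  /* shortest period first */
}

int main(void)
{
    periodic_task_t tasks[] = {
        { "sensor_read",   10, 0 },   /* runs every 10 ms  */
        { "control_loop",  50, 0 },   /* runs every 50 ms  */
        { "logger",       500, 0 },   /* runs every 500 ms */
    };
    size_t n = sizeof tasks / sizeof tasks[0];

    qsort(tasks, n, sizeof tasks[0], by_period);
    for (size_t i = 0; i < n; i++) {
        tasks[i].priority = (unsigned)(n - i);   /* shortest period gets the highest priority */
        printf("%-12s period=%3u ms priority=%u\n",
               tasks[i].name, tasks[i].period_ms, tasks[i].priority);
    }
    return 0;
}
```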
Intertask communication and resource sharing-Tasks in a multitasking system must share data and hardware resources with one another, and unsynchronized concurrent access can leave a shared resource in an inconsistent state. There are three common approaches to resolve this problem:
Temporarily masking/disabling interrupts-General-purpose operating systems usually do not allow user programs to mask (disable) interrupts, because the user program could control the CPU for as long as it wishes. Some modern CPUs don't allow user mode code to disable interrupts as such control is considered a key operating system resource. Many embedded systems and RTOSs, however, allow the application itself to run in kernel mode for greater system call efficiency and also to permit the application to have greater control of the operating environment without requiring OS intervention.
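As a brief sketch of this approach: the code below uses FreeRTOS critical-section macros, chosen only as an example kernel since the answer names no specific RTOS, to mask interrupts around a very short update of a variable that an interrupt handler also touches. The shared counter is hypothetical.

```c
/* Sketch: a short critical section protected by masking interrupts.
 * FreeRTOS macros are one example API; the shared counter is hypothetical. */
#include <stdint.h>
#include "FreeRTOS.h"
#include "task.h"

static volatile uint32_t shared_event_count;   /* also updated from an ISR */

void record_event_from_task(void)
{
    taskENTER_CRITICAL();      /* mask interrupts (architecture-specific)          */
    shared_event_count++;      /* keep the protected region as short as possible   */
    taskEXIT_CRITICAL();       /* restore interrupts                               */
}
```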
Mutexes-When the shared resource must be reserved without blocking all other tasks (such as waiting for Flash memory to be written), it is better to use mechanisms also available on general-purpose operating systems, such as a mutex and OS-supervised interprocess messaging. Such mechanisms involve system calls, and usually invoke the OS's dispatcher code on exit, so they typically take hundreds of CPU instructions to execute, while masking interrupts may take as few as one instruction on some processors.
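A minimal sketch of the mutex approach, again using FreeRTOS purely as an example API; the flash-writer driver call is a hypothetical placeholder:

```c
/* Sketch: serializing access to a slow shared resource with a mutex.
 * FreeRTOS is one example API; flash_write_page() is hypothetical. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include "FreeRTOS.h"
#include "semphr.h"

extern bool flash_write_page(const uint8_t *data, size_t len);  /* hypothetical driver call */

static SemaphoreHandle_t flash_mutex;

void storage_init(void)
{
    flash_mutex = xSemaphoreCreateMutex();
}

/* Called by several tasks; only one at a time may program the flash,
 * but the other tasks keep running while this one waits. */
bool storage_write(const uint8_t *data, size_t len)
{
    bool ok = false;
    if (xSemaphoreTake(flash_mutex, portMAX_DELAY) == pdTRUE) {
        ok = flash_write_page(data, len);
        xSemaphoreGive(flash_mutex);
    }
    return ok;
}
```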
Message passing-The other approach to resource sharing is for tasks to send messages in an organized message passing scheme. In this paradigm, the resource is managed directly by only one task. When another task wants to interrogate or manipulate the resource, it sends a message to the managing task. Although their real-time behavior is less crisp than semaphore systems, simple message-based systems avoid most protocol deadlock hazards, and are generally better-behaved than semaphore systems. However, problems like those of semaphores are possible. Priority inversion can occur when a task is working on a low-priority message and ignores a higher-priority message (or a message originating indirectly from a high priority task) in its incoming message queue. Protocol deadlocks can occur when two or more tasks wait for each other to send response messages.
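A sketch of the message-passing approach, in which a single managing task owns the resource and serves requests arriving on a queue. FreeRTOS queues are used as an example API; the request structure, task names, and sizes are hypothetical.

```c
/* Sketch: one task owns the resource; other tasks send it request messages.
 * FreeRTOS queue API used as an example; message contents are hypothetical. */
#include "FreeRTOS.h"
#include "queue.h"
#include "task.h"

typedef struct {
    int command;    /* hypothetical request code       */
    int argument;   /* hypothetical request parameter  */
} resource_request_t;

static QueueHandle_t request_queue;

/* The only task that touches the resource directly. */
static void resource_manager_task(void *params)
{
    resource_request_t req;
    (void)params;
    for (;;) {
        if (xQueueReceive(request_queue, &req, portMAX_DELAY) == pdTRUE) {
            /* ... act on the resource using req.command / req.argument ... */
        }
    }
}

/* Any other task calls this instead of touching the resource itself. */
BaseType_t resource_request(int command, int argument)
{
    resource_request_t req = { command, argument };
    return xQueueSend(request_queue, &req, pdMS_TO_TICKS(10));
}

void resource_manager_init(void)
{
    request_queue = xQueueCreate(8, sizeof(resource_request_t));
    xTaskCreate(resource_manager_task, "res_mgr", 256, NULL, 3, NULL);
}
```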
Interrupt handlers and the scheduler-Since an interrupt handler blocks the highest priority task from running, and since real time operating systems are designed to keep thread latency to a minimum, interrupt handlers are typically kept as short as possible. The interrupt handler defers all interaction with the hardware if possible; typically all that is necessary is to acknowledge or disable the interrupt (so that it won't occur again when the interrupt handler returns) and notify a task that work needs to be done. This can be done by unblocking a driver task through releasing a semaphore, setting a flag or sending a message. A scheduler often provides the ability to unblock a task from interrupt handler context.
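A sketch of this deferred-interrupt pattern: the ISR only acknowledges the hardware and releases a semaphore, while a driver task blocked on that semaphore does the lengthy work at task level. FreeRTOS is used as an example API; the UART helper functions are hypothetical.

```c
/* Sketch: short ISR that defers work to a driver task via a semaphore.
 * FreeRTOS API used as an example; uart_ack_interrupt()/uart_drain() are hypothetical. */
#include "FreeRTOS.h"
#include "semphr.h"
#include "task.h"

extern void uart_ack_interrupt(void);   /* hypothetical: clear the interrupt source */
extern void uart_drain(void);           /* hypothetical: the time-consuming work     */

static SemaphoreHandle_t uart_sem;

void uart_isr(void)                     /* keep this as short as possible */
{
    BaseType_t woken = pdFALSE;
    uart_ack_interrupt();                     /* so the interrupt does not refire         */
    xSemaphoreGiveFromISR(uart_sem, &woken);  /* unblock the driver task                  */
    portYIELD_FROM_ISR(woken);                /* switch now if that task has higher priority */
}

static void uart_driver_task(void *params)
{
    (void)params;
    for (;;) {
        if (xSemaphoreTake(uart_sem, portMAX_DELAY) == pdTRUE) {
            uart_drain();               /* the lengthy work happens at task level */
        }
    }
}

void uart_driver_init(void)
{
    uart_sem = xSemaphoreCreateBinary();
    xTaskCreate(uart_driver_task, "uart_drv", 256, NULL, 4, NULL);
}
```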
Memory allocation-Memory allocation is more critical in a real-time operating system than in other operating systems.
First, for stability there cannot be memory leaks (memory that is allocated, then unused but never freed). The device should work indefinitely, without ever needing a reboot. For this reason, dynamic memory allocation is frowned upon. Whenever possible, all required memory allocation is specified statically at compile time.
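A small sketch of this static-allocation style: instead of calling malloc() at run time, buffers are declared with fixed sizes at compile time, and a simple fixed-block pool hands them out with a bounded, predictable cost. The sizes and names below are illustrative assumptions; a real system would also protect the pool with a mutex or critical section.

```c
/* Sketch: compile-time allocation plus a tiny fixed-block pool,
 * avoiding malloc()/free() entirely. Sizes and names are illustrative,
 * and no locking is shown (a real system would add it). */
#include <stdbool.h>
#include <stddef.h>

#define BLOCK_SIZE   64      /* every block has the same, fixed size    */
#define BLOCK_COUNT  8       /* total memory use is known at link time  */

static unsigned char pool[BLOCK_COUNT][BLOCK_SIZE];   /* allocated statically */
static bool          in_use[BLOCK_COUNT];

void *pool_alloc(void)
{
    for (size_t i = 0; i < BLOCK_COUNT; i++) {
        if (!in_use[i]) {
            in_use[i] = true;
            return pool[i];
        }
    }
    return NULL;   /* pool exhausted: a bounded, predictable failure mode */
}

void pool_free(void *block)
{
    for (size_t i = 0; i < BLOCK_COUNT; i++) {
        if ((void *)pool[i] == block) {
            in_use[i] = false;
            return;
        }
    }
}
```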