µC/OS-II: The Real-Time Kernel
Had we raised the priority of Task 3 before accessing the resource and then lowered it back when done, we would have wasted valuable CPU time. What is really needed to avoid priority inversion is a kernel that changes the priority of a task automatically; this is called priority inheritance. Not all kernels support it, but some commercial kernels do. The figure illustrates what happens when a kernel supports priority inheritance.

As with the previous example, Task 3 is running (1) and then acquires a semaphore to access a shared resource (2). Task 3 accesses the resource (3) and then gets preempted by Task 1 (4). Task 1 executes (5) and then tries to obtain the semaphore (6).

The kernel sees that Task 3 has the semaphore but has a lower priority than Task 1. In this case, the kernel raises the priority of Task 3 to the same level as Task 1. The kernel then switches back to Task 3 so that this task can continue with the resource (7). When Task 3 is done with the resource, it releases the semaphore (8). At this point, the kernel reduces the priority of Task 3 to its original value and gives the semaphore to Task 1, which is now free to continue (9).

When Task 1 is done executing (10), the medium-priority task (i.e., Task 2) gets the CPU (11). Note that Task 2 could have been ready to run anytime between (3) and (10) without affecting the outcome. There is still some level of priority inversion, but this really cannot be avoided. In most systems, not all tasks are considered critical. Non-critical tasks should obviously be given low priorities. In a SOFT real-time system, tasks are performed by the system as quickly as possible, but they don't have to finish by specific times.

An interesting technique called Rate Monotonic Scheduling (RMS) has been established to assign task priorities based on how often tasks execute. Simply put, tasks with the highest rate of execution are given the highest priority (see the figure). RMS makes a number of assumptions: 1. All tasks are periodic (they occur at regular intervals). 2. Tasks do not synchronize with one another, share resources, or exchange data.

3. The CPU must always execute the highest-priority task that is ready to run; in other words, preemptive scheduling must be used. Under these assumptions, RMS states that all task hard real-time deadlines are met if total CPU utilization satisfies sum(Ei/Ti) <= n(2^(1/n) - 1), where Ei is the maximum execution time of task i, Ti is its period, and n is the number of tasks (see the table below). The upper bound for an infinite number of tasks is given by ln(2), or about 0.693. Note that you can still have non-time-critical tasks in a system and thus use close to 100 percent of the CPU's time.

Using 100 percent of your CPU's time is not a desirable goal because it does not allow for code changes and added features. As a rule of thumb, you should always design a system to use less than 60 to 70 percent of your CPU.
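To make the bound concrete, here is a small standalone C sketch (not from the book; the function name rms_bound is hypothetical) that computes n(2^(1/n) - 1) for a few task counts:

    #include <math.h>
    #include <stdio.h>

    /* RMS schedulability bound: n periodic tasks meet their deadlines
       if the total utilization sum(Ei/Ti) <= n * (2^(1/n) - 1).        */
    double rms_bound(int n)
    {
        return (double)n * (pow(2.0, 1.0 / (double)n) - 1.0);
    }

    int main(void)
    {
        int n;
        for (n = 1; n <= 5; n++) {
            printf("%d task(s): %.3f\n", n, rms_bound(n));
        }
        return (0);
    }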

    Number of Tasks    n(2^(1/n) - 1)
    1                  1.000
    2                  0.828
    3                  0.780
    4                  0.757
    5                  0.743
    ...
    Infinity           0.693

In some cases, the highest-rate task may not be the most important task. Your application will thus dictate how you need to assign priorities. RMS is, however, an interesting starting point. The easiest way for tasks to communicate with each other is through shared data structures. This is especially easy when all the tasks exist in a single address space.

Tasks can thus reference global variables, pointers, buffers, linked lists, ring buffers, etc. While sharing data simplifies the exchange of information, you must ensure that each task has exclusive access to the data to avoid contention and data corruption. The most common methods of obtaining exclusive access to shared resources are: a) disabling interrupts, b) performing test-and-set operations, c) disabling scheduling, and d) using semaphores.

The easiest and fastest way to gain exclusive access to a shared resource is by disabling and enabling interrupts; most kernels provide a pair of macros for this (in µC/OS-II, OS_ENTER_CRITICAL() and OS_EXIT_CRITICAL()). You need to use these macros in pairs, as sketched below. You must be careful, however, not to disable interrupts for too long, because this affects the response of your system to interrupts; this is known as interrupt latency.
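Here is a minimal sketch of the pairing; GlobalCtr is a hypothetical variable shared with an ISR, and the macros are the actual µC/OS-II critical-section macros:

    #include "includes.h"                /* µC/OS-II header convention    */

    INT32U GlobalCtr;                    /* shared with an ISR            */

    void IncrementCounter(void)
    {
        OS_ENTER_CRITICAL();             /* disable interrupts            */
        GlobalCtr++;                     /* touch the shared data         */
        OS_EXIT_CRITICAL();              /* re-enable interrupts          */
    }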

You should consider this method when you are changing or copying a few variables. Also, this is the only way a task can share variables or data structures with an ISR. In all cases, you should keep interrupts disabled for as little time as possible. If you use a kernel, you are basically allowed to disable interrupts for as much time as the kernel does without affecting interrupt latency.

Obviously, you need to know how long the kernel will disable interrupts. Any good kernel vendor will provide you with this information; after all, if they sell a real-time kernel, time is important! If you are not using a kernel, two functions can agree that, to access a resource, they must first check a global variable: if the variable is 0, the function has access to the resource. To prevent the other function from accessing the resource, however, the first function that gets the resource simply sets the variable to 1. This is commonly called a test-and-set (TAS) operation. The TAS operation must either be performed indivisibly by the processor, or you must disable interrupts when doing the TAS on the variable, as sketched below.
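A hedged C sketch of the idea follows; AccessFlag is a hypothetical lock variable:

    INT8U AccessFlag;                    /* 0 = resource free, 1 = in use */

    INT8U AcquireResource(void)
    {
        INT8U ok = 0;
        OS_ENTER_CRITICAL();             /* make the test-and-set indivisible */
        if (AccessFlag == 0) {
            AccessFlag = 1;              /* claim the resource            */
            ok         = 1;
        }
        OS_EXIT_CRITICAL();
        return (ok);                     /* 1 if acquired, 0 to try later */
    }

    void ReleaseResource(void)
    {
        OS_ENTER_CRITICAL();
        AccessFlag = 0;                  /* free the resource             */
        OS_EXIT_CRITICAL();
    }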

Some processors actually implement a TAS operation in hardware (e.g., the TAS instruction on the Motorola 68000 family). The third way to gain exclusive access is to disable and enable the kernel's scheduler (in µC/OS-II, OSSchedLock() and OSSchedUnlock()). In this case, two or more tasks can share data without the possibility of contention. You should note that while the scheduler is locked, interrupts are enabled, and if an interrupt occurs while in the critical section, the ISR will immediately be executed. At the end of the ISR, the kernel will always return to the interrupted task, even if a higher-priority task has been made ready to run by the ISR. The scheduler will be invoked when OSSchedUnlock() is called, to see if a higher-priority task has been made ready to run by the task or an ISR.

A context switch will result if there is a higher-priority task that is ready to run. Although this method works well, you should avoid disabling the scheduler because it defeats the purpose of having a kernel in the first place.
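A minimal sketch of this technique, using the actual µC/OS-II services OSSchedLock() and OSSchedUnlock():

    void AccessSharedData(void)
    {
        OSSchedLock();                   /* no task-level preemption now  */
        /* ... access the shared data structure ...                       */
        OSSchedUnlock();                 /* scheduler runs; may switch    */
    }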

The next method should be chosen instead. A semaphore is a protocol mechanism offered by most multitasking kernels. Semaphores are used to: a) control access to a shared resource (mutual exclusion); b) signal the occurrence of an event; and c) allow two tasks to synchronize their activities.

A semaphore is a key that your code acquires in order to continue execution. If the semaphore is already in use, the requesting task is suspended until the semaphore is released by its current owner. In other words, the requesting task says: "Give me the key. If someone else is using it, I am willing to wait for it!" There are two types of semaphores: binary semaphores and counting semaphores. As its name implies, a binary semaphore can only take two values: 0 or 1.

A counting semaphore allows values between 0 and 255, 65,535, or 4,294,967,295, depending on whether the semaphore mechanism is implemented using 8, 16, or 32 bits, respectively. The actual size depends on the kernel used. Along with the semaphore's value, the kernel also needs to keep track of tasks waiting for the semaphore's availability. The initial value of the semaphore must be provided when the semaphore is initialized. The waiting list of tasks is always initially empty. A task desiring the semaphore performs a WAIT operation.

If the semaphore is available (the semaphore value is greater than 0), the semaphore value is decremented and the task continues execution.

If the semaphore's value is 0, the task performing a WAIT on the semaphore is placed in a waiting list. Most kernels allow you to specify a timeout; if the semaphore is not available within a certain amount of time, the requesting task is made ready to run and an error code indicating that a timeout has occurred is returned to the caller. A task releases a semaphore by performing a SIGNAL operation. If no task is waiting for the semaphore, the semaphore value is simply incremented.

If any task is waiting for the semaphore, however, one of the tasks is made ready to run and the semaphore value is not incremented; the key is given to one of the tasks waiting for it.

Depending on the kernel, the task that will receive the semaphore is either: a) the highest-priority task waiting for the semaphore, or b) the first task that requested the semaphore (First In, First Out, or FIFO). Some kernels allow you to choose either method through an option when the semaphore is initialized.

If the readied task has a higher priority than the current task (the task releasing the semaphore), a context switch occurs (with a preemptive kernel) and the higher-priority task resumes execution; the current task is suspended until it again becomes the highest-priority task ready to run. With µC/OS-II, a task acquires a semaphore by calling OSSemPend() and releases it by calling OSSemPost(); both of these functions are described later. Imagine what would happen if two tasks were allowed to send characters to a printer at the same time.

The printer would contain interleaved data from each task. In this case, we can use a semaphore and initialize it to 1 (i.e., a binary semaphore). The rule is simple: to access the printer, each task must first obtain the resource's semaphore.

The figure shows the tasks competing for a semaphore to gain exclusive access to the printer. Note that the semaphore is represented symbolically by a key, indicating that each task must obtain this key to use the printer. A sketch of the code follows.
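This is a hedged sketch using the µC/OS-II semaphore services; PrinterSem and the task body are illustrative:

    OS_EVENT *PrinterSem;                     /* created with OSSemCreate(1) */

    void PrintTask(void *pdata)
    {
        INT8U err;
        for (;;) {
            OSSemPend(PrinterSem, 0, &err);   /* obtain the key              */
            /* ... send characters to the printer ...                        */
            OSSemPost(PrinterSem);            /* release the key             */
        }
    }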

The above example implies that each task must know about the existence of the semaphore in order to access the resource. There are situations when it is better to encapsulate the semaphore; each task would thus not know that it is actually acquiring a semaphore when accessing the resource. For example, an RS-232C port is used by multiple tasks to send commands to, and receive responses from, a device connected at the other end of the port. A flow diagram is shown in the figure. The function CommSendCmd() is called with three arguments: the ASCII string containing the command, a pointer to the response string from the device, and finally, a timeout in case the device doesn't respond within a certain amount of time.

Each task that needs to send a command to the device has to call this function. The semaphore is assumed to be initialized to 1 (i.e., available). The first task that calls CommSendCmd() will acquire the semaphore and thus proceed to send the command and wait for a response. If another task attempts to send a command while the port is busy, this second task will be suspended until the semaphore is released.

The second task appears simply to have made a call to a normal function that will not return until the function has performed its duty. When the semaphore is released by the first task, the second task acquires the semaphore and is thus allowed to use the RS-232C port. A sketch of CommSendCmd() follows.
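A hedged sketch of the encapsulated interface, assuming a semaphore CommSem created with an initial count of 1; the send and receive steps are left as comments:

    OS_EVENT *CommSem;                        /* created with OSSemCreate(1) */

    INT8U CommSendCmd(char *cmd, char *response, INT16U timeout)
    {
        INT8U err;

        OSSemPend(CommSem, timeout, &err);    /* exclusive use of the port   */
        if (err != OS_NO_ERR) {
            return (err);                     /* port was busy too long      */
        }
        /* ... send 'cmd' out the RS-232C port ...                           */
        /* ... wait for the response (or a timeout) into 'response' ...      */
        OSSemPost(CommSem);                   /* release the port            */
        return (OS_NO_ERR);
    }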

A counting semaphore is used when a resource can be used by more than one task at the same time. For example, a counting semaphore is used in the management of a buffer pool, as shown in the figure. Let's assume that the buffer pool initially contains 10 buffers. A task would obtain a buffer from the buffer manager by calling BufReq().

When the buffer is no longer needed, the task would return the buffer to the buffer manager by calling BufRel(). The semaphore is initialized to 10, and the free buffers are kept on a linked list; code for these functions is sketched after the next paragraph. The buffer manager will satisfy the first 10 buffer requests because there are 10 keys. When all buffers are in use, a task requesting a buffer is suspended until a semaphore becomes available. Interrupts are disabled to gain exclusive access to the linked list (this operation is very quick).

When a task is finished with a buffer it has acquired, it calls BufRel() to return the buffer to the buffer manager; the buffer is inserted into the linked list before the semaphore is released. By encapsulating the interface to the buffer manager in BufReq() and BufRel(), the caller doesn't need to be concerned with the actual implementation details.
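A hedged C sketch of the buffer manager, following the book's pseudocode; the BUF type, BufFreeList, and BufSem are illustrative names:

    typedef struct buf {
        struct buf *BufNext;             /* link to the next free buffer */
        INT8U       BufData[128];        /* the storage itself           */
    } BUF;

    BUF      *BufFreeList;               /* head of the free-buffer list */
    OS_EVENT *BufSem;                    /* created with OSSemCreate(10) */

    BUF *BufReq(void)
    {
        BUF  *ptr;
        INT8U err;

        OSSemPend(BufSem, 0, &err);      /* wait for a free buffer       */
        OS_ENTER_CRITICAL();             /* protect the linked list      */
        ptr         = BufFreeList;
        BufFreeList = ptr->BufNext;
        OS_EXIT_CRITICAL();
        return (ptr);
    }

    void BufRel(BUF *ptr)
    {
        OS_ENTER_CRITICAL();
        ptr->BufNext = BufFreeList;      /* put the buffer back          */
        BufFreeList  = ptr;
        OS_EXIT_CRITICAL();
        OSSemPost(BufSem);               /* one more buffer available    */
    }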

Semaphores are often overused. The use of a semaphore to access a simple shared variable is overkill in most situations. The overhead involved in acquiring and releasing the semaphore can consume valuable time. You can do the job just as efficiently by disabling and enabling interrupts (see the earlier discussion).

Let's suppose that two tasks are sharing a 32-bit integer variable. The first task increments the variable while the other task clears it. If you consider how long a processor takes to perform either operation, you will realize that you do not need a semaphore to gain exclusive access to the variable.

Each task simply needs to disable interrupts before performing its operation on the variable and enable interrupts when the operation is complete. A semaphore should be used, however, if the variable is a floating-point variable and the microprocessor doesn't support floating-point in hardware.

In this case, the processing time involved in processing the floating-point variable could affect interrupt latency if you had disabled interrupts.

If task T1 has exclusive access to resource R1 and task T2 has exclusive access to resource R2, then if T1 needs exclusive access to R2 and T2 needs exclusive access to R1, neither task can continue: they are deadlocked. The simplest way to avoid a deadlock is for tasks to: a) acquire all resources before proceeding, b) acquire the resources in the same order, and c) release the resources in the reverse order.

Most kernels allow you to specify a timeout when acquiring a semaphore. This feature allows a deadlock to be broken: if the semaphore is not available within a certain amount of time, the task requesting the resource resumes execution. Some form of error code must be returned to the task to notify it that a timeout has occurred.
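A hedged sketch of pending with a timeout; ResourceSem is a hypothetical semaphore, and OS_TIMEOUT is the µC/OS-II error code for an expired wait:

    INT8U err;

    OSSemPend(ResourceSem, 100, &err);   /* wait at most 100 ticks       */
    if (err == OS_TIMEOUT) {
        /* did not get the resource; back off and recover                */
    } else {
        /* ... use the resource ...                                      */
        OSSemPost(ResourceSem);          /* release it when done         */
    }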

A returned error code prevents the task from thinking it has obtained access to the resource. Deadlocks generally occur in large multitasking systems and are not often encountered in embedded systems. A task can also be synchronized with an ISR (or with another task when no data is being exchanged) by using a semaphore. Note that, in this case, the semaphore is drawn as a flag to indicate that it is used to signal the occurrence of an event rather than to ensure mutual exclusion, in which case it would be drawn as a key.

When used as a synchronization mechanism, the semaphore is initialized to 0. Using a semaphore for this type of synchronization is called a unilateral rendezvous: for example, a task can initiate an I/O operation and then wait for the semaphore, and when the I/O operation completes, an ISR signals the semaphore and the task is resumed. If the kernel supports counting semaphores, the semaphore would accumulate events that have not yet been processed. Note that more than one task can be waiting for the event to occur.

In this case, the kernel could signal the occurrence of the event either to: a) the highest-priority task waiting for the event to occur, or b) the first task waiting for the event. Depending on the application, more than one ISR or task could signal the occurrence of the event. Two tasks can also synchronize their activities by using two semaphores, as shown in the figure; this is called a bilateral rendezvous.

A bilateral rendezvous is similar to a unilateral rendezvous, except that both tasks must synchronize with one another before proceeding. For example, two tasks are executing as sketched below. When the first task reaches a certain point, it signals the second task and then waits for a signal back. Similarly, when the second task reaches a certain point, it signals the first task and waits for a signal back. At this point, both tasks are synchronized with each other.
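A hedged sketch of a bilateral rendezvous with two semaphores, each created with OSSemCreate(0); the task and semaphore names are illustrative:

    OS_EVENT *Sem1;                      /* signaled by Task2, awaited by Task1 */
    OS_EVENT *Sem2;                      /* signaled by Task1, awaited by Task2 */

    void Task1(void *pdata)
    {
        INT8U err;
        for (;;) {
            /* ... do some work ... */
            OSSemPost(Sem2);             /* signal Task2                 */
            OSSemPend(Sem1, 0, &err);    /* wait for Task2               */
            /* both tasks are synchronized at this point                 */
        }
    }

    void Task2(void *pdata)
    {
        INT8U err;
        for (;;) {
            /* ... do some work ... */
            OSSemPost(Sem1);             /* signal Task1                 */
            OSSemPend(Sem2, 0, &err);    /* wait for Task1               */
            /* both tasks are synchronized at this point                 */
        }
    }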

Event flags are used when a task needs to synchronize with the occurrence of multiple events. The task can be synchronized when any of the events has occurred; this is called disjunctive synchronization (logical OR). A task can also be synchronized when all events have occurred; this is called conjunctive synchronization (logical AND). Common events can be used to signal multiple tasks, and events are typically grouped.

Depending on the kernel, a group consists of 8, 16, or 32 events, each represented by a bit (mostly 32 bits, though). Tasks and ISRs can set or clear any event in a group. A task is resumed when all the events it requires are satisfied.
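As a hedged sketch, µC/OS-II (v2.5 and later) provides event-flag services; the group name, bit assignments, and routines below are hypothetical:

    OS_FLAG_GRP *EngineFlags;            /* created with OSFlagCreate(0x00, &err) */

    void ControlTask(void *pdata)
    {
        INT8U err;
        for (;;) {
            /* conjunctive synchronization: wait until BOTH bit 0 AND
               bit 1 are set (logical AND)                               */
            OSFlagPend(EngineFlags, (OS_FLAGS)0x03,
                       OS_FLAG_WAIT_SET_ALL, 0, &err);
            /* ... both events have occurred ...                         */
        }
    }

    void SensorEvent(void)               /* called from a task or an ISR */
    {
        INT8U err;
        OSFlagPost(EngineFlags, (OS_FLAGS)0x01, OS_FLAG_SET, &err);
    }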

The evaluation of which task will be resumed is performed when a new set of events occurs (i.e., when events are set or cleared). It is sometimes necessary for a task or an ISR to communicate information to another task; this information transfer is called intertask communication. Information may be communicated between tasks in two ways: through global data or by sending messages. When using global variables, each task or ISR must ensure that it has exclusive access to the variables. If an ISR is involved, the only way to ensure exclusive access to the common variables is to disable interrupts. Note that a task can communicate information to an ISR only by using global variables.

A task is not aware when an ISR changes a global variable unless the ISR signals the task (using a semaphore, for example) or unless the task regularly polls the contents of the variable.

To correct this situation (and avoid wasting CPU time polling), you should consider using either a message mailbox or a message queue. A message mailbox, also called a message exchange, is typically a pointer-sized variable. Through a service provided by the kernel, a task or an ISR can deposit a message (the pointer) into this mailbox. Similarly, one or more tasks can receive messages through a service provided by the kernel.

Both the sending task and receiving task will agree as to what the pointer is actually pointing to. A waiting list is associated with each mailbox in case more than one task desires to receive messages through the mailbox. A task desiring to receive a message from an empty mailbox will be suspended and placed on the waiting list until a message is received.

Typically, the kernel will allow the task waiting for a message to specify a timeout. If a message is not received before the timeout expires, the requesting task is made ready-to-run and an error code indicating that a timeout has occurred is returned to it.

When a message is deposited into the mailbox, either the highest-priority task waiting for the message is given the message (priority-based) or the first task to request a message is given the message (First-In, First-Out, or FIFO).

The figure shows a task depositing a message into a mailbox. Note that the mailbox is represented graphically by an I-beam and the timeout is represented by an hourglass. The number next to the hourglass represents the number of clock ticks (described later) that the task will wait for a message to arrive.

Kernel services are typically provided to: a) initialize the contents of a mailbox (the mailbox may or may not initially contain a message); b) deposit a message into the mailbox (POST); c) wait for a message to be deposited into the mailbox (PEND); and d) get a message from the mailbox if one is present, without suspending the caller (ACCEPT); if the mailbox contains a message, the message is extracted, and a return code is used to notify the caller about the outcome of the call. Message mailboxes can also be used to simulate binary semaphores, as sketched below.
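A hedged sketch of the simulation using the µC/OS-II mailbox services; ResMbox and the dummy token are illustrative:

    OS_EVENT *ResMbox;

    void AppInit(void)
    {
        /* deposit a dummy token so the 'resource' starts out available  */
        ResMbox = OSMboxCreate((void *)1);
    }

    void Task(void *pdata)
    {
        INT8U  err;
        void  *token;
        for (;;) {
            token = OSMboxPend(ResMbox, 0, &err);   /* take the token    */
            /* ... use the resource ...                                  */
            OSMboxPost(ResMbox, token);             /* return the token  */
        }
    }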

A message in the mailbox indicates that the resource is available, while an empty mailbox indicates that the resource is already in use by another task. A message queue is basically an array of mailboxes. Through a service provided by the kernel, a task or an ISR can deposit a message (the pointer) into a message queue.

Generally, the first message inserted in the queue will be the first message extracted from the queue FIFO. As with the mailbox, a waiting list is associated with each message queue in case more than one task is to receive messages through the queue.

A task desiring to receive a message from an empty queue is suspended and placed on the waiting list until a message is received. Typically, the kernel allows the task waiting for a message to specify a timeout; if a message is not received before the timeout expires, the requesting task is made ready to run and an error code indicating a timeout occurred is returned to it. When a message is deposited into the queue, either the highest-priority task or the first task to wait for the message is given the message.

The figure shows a task depositing a message into a queue. Note that the queue is represented graphically by a double I-beam. The 10 indicates the number of messages that can accumulate in the queue. A 0 next to the hourglass indicates that the task will wait forever for a message to arrive. Kernel services are typically provided to: a) initialize the queue (the queue is always assumed to be empty after initialization); b) deposit a message into the queue (POST); c) wait for a message to be deposited into the queue (PEND); and d) get a message from the queue if one is present, without suspending the caller (ACCEPT); if the queue contains a message, the message is extracted from the queue. A sketch of the basic services follows.
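A hedged sketch using the µC/OS-II queue services; the storage array, queue pointer, and message layout are illustrative:

    void     *MsgStorage[10];            /* room for 10 pointer-sized messages    */
    OS_EVENT *MsgQ;                      /* MsgQ = OSQCreate(&MsgStorage[0], 10); */

    void Producer(void *pdata)
    {
        static INT32U value;
        for (;;) {
            value++;
            OSQPost(MsgQ, (void *)&value);           /* send a message   */
            OSTimeDly(10);
        }
    }

    void Consumer(void *pdata)
    {
        INT8U   err;
        INT32U *pmsg;
        for (;;) {
            pmsg = (INT32U *)OSQPend(MsgQ, 0, &err); /* wait forever     */
            /* ... process *pmsg ...                                     */
        }
    }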

An interrupt is a hardware mechanism used to inform the CPU that an asynchronous event has occurred. When an interrupt is recognized, the CPU saves part (or all) of its context (i.e., registers) and jumps to a special subroutine called an interrupt service routine (ISR). Interrupts allow a microprocessor to process events when they occur, which prevents the microprocessor from continuously polling an event to see if it has occurred.

Microprocessors allow interrupts to be ignored and recognized through the use of two special instructions: disable interrupts and enable interrupts, respectively. In a real-time environment, interrupts should be disabled as little as possible, because disabling interrupts affects interrupt latency.

Microprocessors generally allow interrupts to be nested. This means that while servicing an interrupt, the processor will recognize and service other, more important interrupts, as shown in the figure. All real-time systems disable interrupts to manipulate critical sections of code and re-enable interrupts when the critical sections have executed. The longer interrupts are disabled, the higher the interrupt latency. The interrupt response time, in turn, accounts for all the overhead involved in handling an interrupt.

Typically, the processor's context (CPU registers) is saved on the stack before the user code is executed. For a non-preemptive kernel, the user ISR code is executed immediately after the processor's context is saved. For a preemptive kernel, a special function provided by the kernel needs to be called first; this function notifies the kernel that an ISR is in progress and allows the kernel to keep track of interrupt nesting. A system's worst-case interrupt response time is its only meaningful response time: even if your system usually responds quickly, you must design for the worst case.

Interrupt recovery is the time required for the processor to return to the interrupted code; for a non-preemptive kernel, it simply consists of restoring the processor's context and executing a return from interrupt. For a preemptive kernel, interrupt recovery is more complex: typically, a function provided by the kernel is called at the end of the ISR. This function determines whether all interrupts have nested (i.e., whether the processor will return to task-level code). If a higher-priority task is ready to run as a result of the ISR, this task is resumed. Note that, in this case, the interrupted task will be resumed only when it again becomes the highest-priority task ready to run.

In the latter case, the execution time is slightly longer because the kernel has to perform a context switch; this lets you see the cost, in execution time, of switching context. Although ISRs should generally be kept short, if the ISR's code is the most important code that needs to run at any given time, it could be as long as it needs to be. You should also consider whether the overhead involved in signaling a task exceeds the cost of processing the interrupt directly. Signaling a task from an ISR (i.e., through a semaphore, a mailbox, or a queue) takes time. If processing your interrupt requires less time than is required to signal a task, you should consider processing the interrupt in the ISR itself and possibly enabling interrupts to allow higher-priority interrupts to be recognized and serviced.

Most microprocessors also provide a nonmaskable interrupt (NMI). Because the NMI cannot be disabled, its interrupt latency, response, and recovery are minimal. The NMI is generally reserved for drastic measures, such as saving important information during a power down.

If, however, your application doesn't have this requirement, you could use the NMI to service your most time-critical ISR; the latency and response figures given earlier then improve because no interrupt-disable time is involved. When you are servicing an NMI, you cannot use kernel services to signal a task, because NMIs cannot be disabled to protect access to critical sections of code. You can, however, still pass parameters to and from the NMI. Parameters passed must be global variables, and the size of these variables must be read or written indivisibly, that is, not as separate byte read or write instructions.

NMIs can be disabled by adding external circuitry, as shown in the figure; with this circuit, interrupts are disabled by writing a 0 to an output port. You wouldn't want to disable interrupts this way just to use kernel services, but you could use this feature to pass parameters (i.e., larger variables) to and from the NMI's ISR. Now, let's suppose that the NMI service routine needs to signal a task every 40 times it executes. In this case, the NMI service routine would generate a hardware interrupt through an output port (i.e., by writing to a port that is wired to an interrupt input).

Because the NMI service routine typically has the highest priority, and interrupt nesting is typically not allowed while servicing the NMI ISR, the interrupt would not be recognized until the end of the NMI service routine. At the completion of the NMI service routine, the processor would be interrupted to service this hardware interrupt, whose ISR would signal the semaphore that the task is waiting on. As long as the task services the semaphore well within 6 mS, your deadline would be met. A hedged sketch of the scheme follows.
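This sketch makes the following assumptions: the NMI fires periodically, writing to the hypothetical port INT_TRIGGER_PORT raises an ordinary (maskable) interrupt, and NMISem is the semaphore the task pends on:

    #define SIGNAL_EVERY  40

    OS_EVENT *NMISem;                    /* the task pends on this       */

    void NMI_ISR(void)                   /* cannot use kernel services   */
    {
        static INT8U count = 0;

        /* ... time-critical NMI processing ...                          */
        if (++count >= SIGNAL_EVERY) {
            count = 0;
            /* hypothetical: raise a maskable interrupt via a port write */
            /* outp(INT_TRIGGER_PORT, 1);                                */
        }
    }

    void TriggeredISR(void)              /* the maskable interrupt's ISR */
    {
        /* kernel services are allowed here (with the usual
           OSIntEnter()/OSIntExit() bracketing in a real port)           */
        OSSemPost(NMISem);               /* wake the waiting task        */
    }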

A clock tick is a special interrupt that occurs periodically; this interrupt can be viewed as the system's heartbeat. The time between interrupts is application specific and is generally between 10 and 200 mS. The clock tick interrupt allows a kernel to delay tasks for an integral number of clock ticks and to provide timeouts when tasks are waiting for events to occur.

The faster the tick rate, the higher the overhead imposed on the system. All kernels allow tasks to be delayed for a certain number of clock ticks. The resolution of a delayed task is one clock tick; however, this does not mean that its accuracy is one clock tick.

The figures that follow are timing diagrams showing a task delaying itself for one clock tick. The shaded areas indicate the execution time for each operation being performed. Note that the time for each operation varies to reflect typical processing, which would include loops and conditional statements. Case 1 shows a situation where higher-priority tasks and ISRs execute prior to the task, which needs to delay for one tick.

As you can see, the task attempts to delay for 20 mS but, because of its priority, actually executes at varying intervals. This causes the execution of the task to jitter. Case 2 shows a situation where the execution times of all higher-priority tasks and ISRs are slightly less than one tick. If the task delays itself just before a clock tick, the task will execute again almost immediately! Because of this, if you need to delay a task for at least one clock tick, you must specify one extra tick.

In other words, if you need to delay a task for at least five ticks, you must specify six ticks! Case 3 shows a situation where the execution times of all higher-priority tasks and ISRs extend beyond one clock tick. In this case, the task that tries to delay for one tick actually executes two ticks later and misses its deadline. This might be acceptable in some applications, but in most cases it isn't.
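In µC/OS-II, the tick-delay service is OSTimeDly(); a one-line sketch of the one-extra-tick rule:

    /* To guarantee a delay of AT LEAST 5 ticks, ask for 6; the first
       tick can arrive almost immediately after the call.                */
    OSTimeDly(6);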

These situations exist with all real-time kernels. They are related to CPU processing load and possibly to incorrect system design. Here are some possible solutions to these problems: a) increase the clock rate of your microprocessor; b) increase the time between tick interrupts; c) rearrange task priorities. Regardless of what you do, jitter will always occur. Memory requirements are another consideration: in a foreground/background system, they depend only on your application code, but with a multitasking kernel, things are quite different. To begin with, a kernel requires extra code space (ROM). The size of the kernel depends on many factors.

Depending on the features provided by the kernel, you can expect anywhere from 1 to 100 Kbytes of code space. A minimal kernel for an 8-bit CPU that provides only scheduling, context switching, semaphore management, delays, and timeouts should require about 1 to 3 Kbytes of code space. Because each task runs independently of the others, each task must be provided with its own stack area (RAM).

As a designer, you must determine the stack requirement of each task as closely as possible (this is sometimes a difficult undertaking). The stack size must not only account for the task's requirements (local variables, function calls, etc.), it must also account for maximum interrupt nesting (saved registers, local storage in ISRs, etc.). Depending on the target processor and the kernel used, a separate stack can be used to handle all interrupt-level code.

This is a desirable feature because the stack requirement for each task can be substantially reduced. Some kernels, however, require that all task stacks be the same size. All kernels require extra RAM to maintain internal variables, data structures, queues, etc.

Unless you have large amounts of RAM to work with, you will need to be careful how you use the stack space. To reduce the amount of RAM needed in an application, you must be careful how each task's stack is used for: a) large arrays and structures declared locally to functions and ISRs; b) function (i.e., subroutine) nesting; c) interrupt nesting; d) library function stack usage; and e) function calls with many arguments. The amount of extra ROM depends only on the size of the kernel, and the amount of RAM depends on the number of tasks in your system. The use of an RTOS simplifies the design process by splitting the application code into separate tasks.

With a preemptive RTOS, all time-critical events are handled as quickly and as efficiently as possible. An RTOS allows you to make better use of your resources by providing you with valuable services such as semaphores, mailboxes, queues, time delays, timeouts, etc. The one factor I haven't mentioned so far is the cost associated with the use of a real-time kernel.

In some applications, cost is everything and would preclude you from even considering an RTOS. Commercial kernel products are available for 8-, 16-, 32-, and even 64-bit microprocessors.

The RTOS vendor may also require royalties on a per-target-system basis. This is like buying a chip from the RTOS vendor that you include with each unit sold; the RTOS vendors call this silicon software. The interrupt-disable time is one of the most important specifications a real-time kernel vendor can provide, because it affects the responsiveness of your system to real-time events. Some compilers allow you to insert inline assembly language statements in your C source code.

This makes it quite easy to insert processor instructions to enable and disable interrupts. Other compilers actually contain language extensions to enable and disable interrupts directly from C.
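As a hedged sketch, the critical-section macros might be defined with inline assembly; this assumes an x86-style target and a compiler that accepts asm statements (the exact syntax is compiler specific):

    #define OS_ENTER_CRITICAL()   asm("CLI")    /* disable interrupts */
    #define OS_EXIT_CRITICAL()    asm("STI")    /* enable  interrupts */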

A task looks just like any other C function, with a return type and an argument, but it never returns. The return type must always be declared void. A task typically runs an infinite loop; alternatively, the task can delete itself upon completion by calling OSTaskDel(). The task receives a single argument when it first starts, and you will notice that the argument is a pointer to void.

This allows your application to pass just about any kind of data to your task. It is possible (see Example 1 in Chapter 1) to create many identical tasks, all using the same function (or task body). For example, you could have four serial ports that are each managed by their own task, as sketched below.
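A hedged sketch of one task body serving several ports; the stack arrays, priorities, and port table are hypothetical:

    #define STK_SIZE  128

    OS_STK SerialStk[4][STK_SIZE];
    INT8U  PortNum[4] = {0, 1, 2, 3};

    void SerialTask(void *pdata)              /* one body for all ports  */
    {
        INT8U port = *(INT8U *)pdata;         /* which port to manage    */
        for (;;) {
            /* ... service serial port 'port' ...                        */
        }
    }

    void CreateSerialTasks(void)
    {
        INT8U i;
        for (i = 0; i < 4; i++) {
            /* each instance gets its own stack and a unique priority
               (10..13 here, chosen arbitrarily)                         */
            OSTaskCreate(SerialTask, (void *)&PortNum[i],
                         &SerialStk[i][STK_SIZE - 1], 10 + i);
        }
    }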

However, the task code is actually identical; instead of copying the code, you create each task with a unique argument. µC/OS-II can manage up to 64 tasks, but the current version reserves eight of these for itself; you can thus have up to 56 application tasks. The lower the priority number, the higher the priority of the task.

Each task must be assigned a unique priority, and the priority number (i.e., a value from 0 to 63) also serves as the task identifier. A task is created by calling either OSTaskCreate() or OSTaskCreateExt(); these two functions are explained in Chapter 4, Task Management. At any given time, a task can be in any one of five states: dormant, ready, running, waiting for an event, or interrupted (ISR). Tasks may be created before multitasking starts or dynamically by a running task.

When created by a task, if the created task has a higher priority than its creator, the created task is immediately given control of the CPU. A task can return itself or another task to the dormant state by calling OSTaskDel().

Multitasking is started by calling OSStart(). Only one task can be running at any given time. A ready task will not run until all higher-priority tasks are either placed in the wait state or are deleted.

The delayed task is made ready to run by OSTimeTick() when the desired time delay expires (see the clock tick discussion). When a task pends on an event, the next-highest-priority task is immediately given control of the CPU. The task is made ready when the event occurs; the occurrence of an event may be signaled by either another task or an ISR. A running task can be interrupted (unless the task or µC/OS-II disables interrupts); the interrupted task thus enters the ISR state. The ISR may make one or more tasks ready to run by signaling one or more events. If a higher-priority task is made ready to run by the ISR, then the new highest-priority task is resumed.

Otherwise, the interrupted task is resumed. µC/OS-II maintains the state of each task in a data structure called a task control block (OS_TCB). When the task regains control of the CPU, the task control block allows the task to resume execution exactly where it left off. Some commercial kernels assume that all stacks are the same size unless you write complex hooks.

This limitation wastes RAM when tasks have different stack requirements, because the largest anticipated stack size has to be allocated for all tasks. The OS_TCB also provides an extension pointer (.OSTCBExtPtr) to a user-definable data area: you could create a data structure that contains the name of each task, keeps track of the execution time of the task, the number of times the task has been switched in, and more (see Example 3). Note that I decided to place this pointer immediately after the stack pointer in case you need to access this field from assembly language.

This makes calculating the offset from the beginning of the data structure easier. The OS_TCB also records the stack size, which allows you to determine the amount of free stack space available for each task. Note that the stack size is specified in number of elements, not bytes: if a stack contains 1,000 entries and each entry is 32 bits wide, the actual size of the stack is 4,000 bytes; similarly, a stack whose entries are 16 bits wide would contain 2,000 bytes for the same 1,000 entries. The stack only needs to be cleared if you intend to do stack checking.
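µC/OS-II performs stack checking with OSTaskStkChk(); a minimal sketch (the priority value is arbitrary):

    OS_STK_DATA stk_data;
    INT8U       err;

    err = OSTaskStkChk(10, &stk_data);   /* check the task at priority 10 */
    if (err == OS_NO_ERR) {
        /* stk_data.OSFree and stk_data.OSUsed report the free and used
           stack space, in bytes                                          */
    }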

The larger your stack, the longer clearing it will take. The .OSTCBId field holds an identifier for the task; this field is currently not used and has only been included for future expansion. The .OSTCBNext and .OSTCBPrev fields doubly link the task control blocks so that an element in the chain can be quickly inserted or removed. .OSTCBDly is used when a task needs to be delayed for a certain number of clock ticks, or when a task needs to pend for an event to occur with a timeout. In this case, this field contains the number of clock ticks that the task is allowed to wait for the event to occur.

When this value is zero, the task is not delayed or has no timeout when waiting for an event. The .OSTCBX, .OSTCBY, .OSTCBBitX, and .OSTCBBitY fields are used to accelerate scheduling; the values for these fields are computed when the task is created or when the task's priority is changed.

In addition to their value as references to the kernel, the books are extremely detailed and highly readable design studies, particularly useful for embedded systems students. While documenting the design and implementation of the kernel, the books also discuss many related development issues: how to adapt the kernel to a new microprocessor, how to install the kernel, and how to structure the applications that run on it. These books are written for serious embedded systems programmers, consultants, hobbyists, and students interested in understanding how to use a real-time kernel.

Independent review by author and embedded systems expert Jack Ganssle, from Embedded Systems Programming Magazine (January): This version is more than a simple upgrade; it appears to be a total rewrite. Weighing in at several hundred pages, it's a complete description of the RTOS and of how to use it in your application. It has been ported to a vast number of microprocessors. Best of all, to me, is that the code is written in an eminently clear and consistent fashion.

Want to teach people how to write clean code? Show them this book. Jean has added chapters and more material that give a very easy-to-understand description of what is going on.

There are many more illustrations than in the previous volume. I like the fact that he has annotated the listings listings that demonstrate how to use the RTOS with numbers that refer to descriptions in the text. That speeds understanding of the concepts a lot. Highly recommended. Part II of each book describes practical, working applications for embedded medical devices built on popular microprocessors.

Each of the included examples features hands-on working projects, which allow you to get your application running quickly. Together with the Renesas e2studio, the evaluation board provides everything necessary to get you up and running quickly, as well as a fun and educational experience, resulting in a high level of proficiency in a short time. In the reference material that follows, note that receiving a queue-full style error from OSMemPut() is surely an indication that something is wrong, because you are returning more memory blocks than you obtained using OSMemGet().

OSMutexAccept() checks to see if a resource guarded by a mutex is available; unlike OSMutexPend(), it does not suspend the calling task if the resource is unavailable. In other words, OSMutexAccept() is non-blocking. Arguments: pevent is a pointer to the mutex that guards the resource; this pointer is returned to your application when the mutex is created [see OSMutexCreate()]. If the mutex is owned by another task, OSMutexAccept() returns 0. Mutexes must be created before they are used, and this function must not be called by an ISR. OSMutexCreate() creates and initializes a mutex; a mutex is used to gain exclusive access to a resource. Arguments: prio is the priority inheritance priority (PIP) that is used when a high-priority task attempts to acquire the mutex while it is owned by a low-priority task.

In this case, the priority of the low-priority task is raised to the PIP until the resource is released. Returned value: a pointer to the event control block allocated to the mutex. You must make sure that prio has a higher priority than any of the tasks that use the mutex to access the resource.

For example, if three tasks of priority 20, 25, and 30 are going to use the mutex, then prio must be a number lower than 20. In addition, there must not already be a task created at the specified priority.

OSMutexDel() deletes a mutex. This function is dangerous to use because multiple tasks could attempt to access a deleted mutex. Generally speaking, before you delete a mutex, you must first delete all the tasks that can access the mutex. Arguments: pevent is a pointer to the mutex; this pointer is returned to your application when the mutex is created [see OSMutexCreate()]. You should use this call with care because other tasks might expect the presence of the mutex.

OSMutexPend() waits on a mutex. If the mutex is available, the caller becomes its owner and continues; however, if the mutex is already owned by another task, OSMutexPend() places the calling task in the wait list for the mutex. The task thus waits until the task that owns the mutex releases it (and thus the resource) or until the specified timeout expires.

A timeout value of 0 indicates that the task is willing to wait forever for the mutex. Note that the timeout count starts being decremented on the next clock tick, which could potentially occur immediately; in any case, your code should release the resource as quickly as possible. OSMutexPost() releases the mutex. If the priority of the owning task had been raised when a higher-priority task attempted to acquire the mutex, the original task priority is restored. If one or more tasks are waiting for the mutex, the mutex is given to the highest-priority waiting task; the scheduler then determines if the readied task is now the highest-priority task ready to run, and if so, a context switch is done to run it.

If no task is waiting for the mutex, the mutex value is simply set to available (0xFF). You cannot call this function from an ISR. OSMutexQuery() obtains run-time information about a mutex; it allows you to determine whether any task is waiting on the mutex and how many tasks are waiting (by counting the number of 1s in the .OSEventTbl[] field). A sketch that puts these mutex services together follows.
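A hedged usage sketch of the mutex services; the PIP of 9 and the names are arbitrary:

    OS_EVENT *DispMutex;

    void AppInit(void)
    {
        INT8U err;
        DispMutex = OSMutexCreate(9, &err);   /* PIP must be a higher     */
                                              /* priority than any user   */
    }

    void DisplayTask(void *pdata)
    {
        INT8U err;
        for (;;) {
            OSMutexPend(DispMutex, 0, &err);  /* wait forever             */
            /* ... exclusive access to the shared resource ...            */
            OSMutexPost(DispMutex);
        }
    }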

OSQAccept() checks to see if a message is available in a message queue; unlike OSQPend(), it does not suspend the caller if no message is available. In other words, OSQAccept() is non-blocking. If a message is available, it is extracted from the queue and returned to your application. Arguments: pevent is a pointer to the message queue from which the message is received.

This pointer is returned to your application when the message queue is created [see OSQCreate()]. Message queues must be created before they are used. OSQCreate() creates a message queue. A message queue allows tasks or ISRs to send pointer-sized variables (messages) to one or more tasks; the meaning of the messages sent is application specific.

Arguments: start is the base address of the message storage area (a message storage area is declared as an array of pointers to void), and size is the number of elements in the storage area. Queues must be created before they are used.

OSQDel() deletes a message queue. This function is dangerous to use because multiple tasks could attempt to access a deleted queue. Generally speaking, before you delete a queue, you must first delete all the tasks that can access the queue.

Arguments: pevent is a pointer to the queue; this pointer is returned to your application when the queue is created [see OSQCreate()]. You should use this call with care because other tasks might expect the presence of the queue. Interrupts are disabled when pended tasks are readied, which means that interrupt latency depends on the number of tasks that were waiting on the queue. This function takes the same amount of time to execute regardless of whether tasks are waiting on the queue (and thus no messages are present) or the queue contains one or more messages.

Arguments: pevent is a pointer to the message queue. OSQPend() waits for a message to arrive at a queue. The messages are sent to the task either by an ISR or by another task; the messages received are pointer-sized variables, and their use is application specific. If at least one message is present in the queue when OSQPend() is called, the message is retrieved and returned to the caller. If no message is present, OSQPend() suspends the current task until either a message is received or a user-specified timeout expires.

Arguments: pevent is a pointer to the queue from which the messages are received. OSQPost() sends a message to a queue; if the message queue is full, an error code is returned to the caller.

In this case, OSQPost() immediately returns to its caller, and the message is not placed in the queue. If any task is waiting for a message at the queue, the highest-priority task receives the message.

If the task waiting for the message has a higher priority than the task sending the message, the higher-priority task resumes and the task sending the message is suspended; that is, a context switch occurs. Message queues are first-in, first-out (FIFO), which means that the first message sent is the first message received.

Arguments: pevent is a pointer to the queue into which the message is deposited. You must never post a NULL pointer; by convention, a NULL pointer is not supposed to point to anything valid. The message is a pointer-sized variable, and its use is application specific. OSQPostFront() sends a message to a queue, inserting it at the front of the queue; if the queue is full, OSQPostFront() immediately returns to its caller, and the message is not placed in the queue.

If the message queue is full, an error code is returned indicating that the queue is full; OSQPostOpt() then immediately returns to its caller, and the message is not placed in the queue. In either case, scheduling occurs, and if any of the tasks that receive the message have a higher priority than the task that is posting the message, the higher-priority task is resumed and the sending task is suspended.

OSQPostOpt() with the broadcast option allows the posted message to be sent to all tasks waiting on the queue. You must never post a NULL pointer to a queue. OSQQuery() allows you to determine whether any tasks are waiting for messages at the queue, how many tasks are waiting (by counting the number of 1s in the .OSEventTbl[] field), how many messages are in the queue, and what the message queue size is. OSQQuery() also obtains the next message that would be returned if the queue is not empty. OSSchedLock() prevents task rescheduling until its counterpart, OSSchedUnlock(), is called; however, interrupts are still recognized and serviced (assuming interrupts are enabled). Scheduling is re-enabled when an equal number of OSSchedUnlock() calls have been made. Be careful not to call a service that suspends the calling task while the scheduler is locked: because the scheduler is locked out, no other task is allowed to run, and your system will lock up.

OSSemAccept() checks to see if a resource is available or an event has occurred; unlike OSSemPend(), it does not suspend the calling task if the resource is not available. In other words, OSSemAccept() is non-blocking. Arguments: pevent is a pointer to the semaphore that guards the resource; this pointer is returned to your application when the semaphore is created [see OSSemCreate()]. Returned value: when OSSemAccept() is called and the semaphore value is greater than 0, the semaphore value is decremented, and the value of the semaphore before the decrement is returned to your application. If the semaphore value is 0 when OSSemAccept() is called, the resource is not available, and 0 is returned.

Semaphores must be created before they are used. OSSemCreate() creates and initializes a semaphore; an initial value of 0 indicates that a resource is not available or an event has not occurred. OSSemDel() deletes a semaphore. This function is dangerous to use because multiple tasks could attempt to access a deleted semaphore. Generally speaking, before you delete a semaphore, you must first delete all the tasks that can access the semaphore.

Arguments: pevent is a pointer to the semaphore. Depending on the option specified, all pending tasks can be readied when the semaphore is deleted. You should use this call with care because other tasks might expect the presence of the semaphore. Interrupts are disabled when pended tasks are readied, which means that interrupt latency depends on the number of tasks that were waiting on the semaphore. OSSemPend() waits on a semaphore: if the semaphore value is greater than 0, it is decremented and the caller continues; however, if the value of the semaphore is 0, OSSemPend() places the calling task in the waiting list for the semaphore.

The task waits until a task or an ISR signals the semaphore or the specified timeout expires. Note that a pended task that has been suspended with OSTaskSuspend() can still obtain the semaphore. A timeout value of 0 indicates that the task waits forever for the semaphore. OSSemPost() signals a semaphore: if no task is waiting and the semaphore value is 0 or more, it is incremented, and OSSemPost() returns to its caller. If tasks are waiting for the semaphore to be signaled, OSSemPost() removes the highest-priority task pending on the semaphore from the waiting list and makes this task ready to run.

The scheduler is then called to determine if the awakened task is now the highest-priority task ready to run. OSSemQuery() allows you to determine whether any tasks are waiting on the semaphore and how many tasks are waiting (by counting the number of 1s in the .OSEventTbl[] field), and it obtains the semaphore count.


