Please report new issues at https://github.com/DOCGroup
Irfan and I have been discussing RTCORBA over the past week or so. Irfan has identified areas in ACE+TAO that need attention so that priority inversion can be avoided. We decided to document this separately so that there is a single source for this information. The following observations are most relevant to the client-propagated policy model, but they should be useful in general.

1. The connection cache has no notion of priority or banding. If a thread of higher priority sets up a connection to a remote endpoint, and that connection is later served by a thread whose priority has been changed by the client-propagated model, we want to be sure that only the highest-priority thread accesses the same connection again. With neither the reactor nor the connection cache carrying priority information, and with the client-propagated model, we seem to be seeing quite a few graphs that look odd.

2. As documented in 1031, the Reactor (especially the TP reactor) needs to have a notion of the priority of the endpoints once the connection is set up.

3. We may have to disable the optimization in the reactor's event-selection mechanism for RTCORBA; I am reopening Bug 567. The reason is this: if a thread receives four events during select (), the reactor tries to dispatch all of them before calling select () again (this would not be the case if the reactor's internal state changed). If an event appears on a higher-priority endpoint as soon as the first call to select () returns, the Reactor is not going to look at that event until all four events have been dispatched. This could lead to priority inversion from within the reactor. We may have to allow every thread to call select () and prioritize dispatching. This is going to affect our performance, so it should be done only when RTCORBA is in use; we may have to strategize the reactor for this case.
I intentionally skipped one point this morning when I was writing this report; here it is:

4. We may have to disable the single-read optimization for RTCORBA cases. The reason is as follows. With the single-read optimization, we may read more than one message and queue them up, and we also send a notify () to the reactor. This is where the problem starts. When the reactor processes the notify () call, it needs to read the message and ascertain the priority of the handle. If that message is of lower priority within a set of events, we need to store the notify () message somewhere, and doing that is going to be expensive. What we are essentially doing by sending notify () messages is putting messages of varying priorities in FIFO order, which may not be acceptable. The easier way around this is to disable the single-read optimization for RTCORBA. Then we would do two reads: one for the GIOP header and another for the rest of the message. This needs to be strategized into the incoming code path.
Assigning the bug to myself. Hopefully I will find time to work on this someday.
Accepting this one
Assigning it to Irfan, since he is working on this. Irfan, please do the needful.
Accepted
to generic pool