The driver-reaction prediction model predicts an automotive driver's reaction to road scenarios (such as slow traffic or curvy and slippery roads), dashboard gauges and tell-tales (such as the speedometer), and in-vehicle information sources such as radio stations, apps, and GPS devices. It combines the information available to the driver on the road with the driver's corresponding reaction to each scenario, as captured by various sensors, to form an adaptive model that predicts and classifies driver behavior.
The design of the model centers on deep learning for sequence prediction, applied primarily to video captured by an onboard camera, followed by facial-expression detection, pattern matching, and mapping. The captured data will exhibit different patterns, so different assumptions will be needed about the time dependencies and the corresponding reactions seen by the camera. To make the deep learning more robust, the outputs of the different action-classification patterns will be used to analyze the various parameters of the model.
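To make the sequence-prediction idea concrete, here is a minimal sketch of classifying a driver reaction from a sequence of per-frame features. It assumes feature vectors have already been extracted from each video frame (e.g. by a face-detection stage); all dimensions, weights, and names are illustrative stand-ins, not a trained model.

```python
import math
import random

random.seed(0)

# Illustrative sizes: 4-dim per-frame features, 3 hidden units, 2 reaction classes
FEAT, HIDDEN, CLASSES = 4, 3, 2

def rand_matrix(rows, cols):
    return [[random.uniform(-0.1, 0.1) for _ in range(cols)] for _ in range(rows)]

# Random weights stand in for trained parameters (purely illustrative)
Wx = rand_matrix(HIDDEN, FEAT)
Wh = rand_matrix(HIDDEN, HIDDEN)
Wo = rand_matrix(CLASSES, HIDDEN)

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def classify_sequence(frames):
    """Recurrent pass over per-frame feature vectors: the hidden state h
    carries the time dependency, and the final state is mapped to
    reaction-class probabilities via a softmax."""
    h = [0.0] * HIDDEN
    for f in frames:
        h = [math.tanh(a + b) for a, b in zip(matvec(Wx, f), matvec(Wh, h))]
    logits = matvec(Wo, h)
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

# A 10-frame clip of 4-dim features
clip = [[random.uniform(-1, 1) for _ in range(FEAT)] for _ in range(10)]
probs = classify_sequence(clip)
```

In a real system the recurrent core would be a trained LSTM or similar network; the point here is only the shape of the computation, in which each frame updates a hidden state that summarizes the sequence so far.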
These outputs give insight into the state of the observation model, but the primary problem at this stage is instantaneous noise that the current model cannot predict. For example, a driver might express shock because of a personal conversation with a co-passenger, yet in the onboard video the shock appears to have been triggered by a particular warning sign or tell-tale. In parallel, the state has to be predicted from the variation between the current observation and the previous prediction captured by the model. Given the nonlinearity of the various scenarios, the current state prediction must be computed from the previous prediction, the current observation, and the current variation. The prediction error must also be recalculated iteratively for every new observation. Execution on a sequence of camera observations therefore consists of two steps: predict the driver's response, then update the model's parameters for the next iteration.
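The two-step predict-and-update loop described above can be sketched as a simple scalar filter. This is a minimal illustration, assuming a one-dimensional driver-state score and hand-picked noise levels `q` and `r`; the function names and values are hypothetical, not part of the model as specified.

```python
def predict_and_update(x_prev, p_prev, z, q=0.01, r=0.25):
    """One iteration of the two-step loop.
    x: estimated driver-state score, p: its error estimate,
    z: the new camera observation. q and r are assumed noise levels."""
    # Step 1 -- predict: carry the previous estimate forward,
    # letting its uncertainty grow by the process noise q
    x_pred, p_pred = x_prev, p_prev + q
    # Step 2 -- update: weight the variation (z - x_pred) by a gain
    # derived from the current error versus the observation noise r
    gain = p_pred / (p_pred + r)
    x_new = x_pred + gain * (z - x_pred)
    p_new = (1.0 - gain) * p_pred
    return x_new, p_new

# Run the loop over a short sequence of noisy observations of a reaction score
x, p = 0.0, 1.0
for z in [0.9, 1.1, 1.0, 0.95, 1.05]:
    x, p = predict_and_update(x, p, z)
```

With each new observation the estimate moves toward the observed values and the error estimate `p` shrinks, which is exactly the iterative error recalculation the text calls for.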
The steps described above apply to a single expression matched against a single scenario; in practice there will be a complex mixture of driver expressions, scenarios, and tell-tales. These will have to be handled in several stages of the prediction model, or rather by an array of prediction models.
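One way to picture such an array of models is a dispatch table keyed by scenario, with one model per scenario. The models and thresholds below are hypothetical placeholders; in practice each entry would be a trained network rather than a rule.

```python
# Hypothetical per-scenario models; each takes an expression score in [0, 1]
def slow_traffic_model(expr_score):
    return "frustration" if expr_score > 0.6 else "neutral"

def warning_telltale_model(expr_score):
    return "alarm" if expr_score > 0.4 else "neutral"

MODELS = {
    "slow_traffic": slow_traffic_model,
    "warning_telltale": warning_telltale_model,
}

def predict_reaction(scenario, expression_score):
    """Route an observation to the prediction model for its scenario."""
    model = MODELS.get(scenario)
    if model is None:
        raise KeyError(f"no model for scenario {scenario!r}")
    return model(expression_score)
```

The design choice here is that each scenario gets its own, simpler model rather than forcing one model to cover every expression-scenario combination, which matches the staged approach described above.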
- Drivers' reactions are nonlinear and highly complex, and cannot be predicted by a single model
- Information availability, and achieving linearity in the input across all types of driver model
- Fusion of data from various sensors, and data classification
- Generalization of the prediction model
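For the sensor-fusion challenge, a minimal sketch is inverse-variance weighting of readings that estimate the same quantity. The sensor names and numbers are assumptions for illustration only; a real pipeline would fuse many heterogeneous streams.

```python
def fuse(readings):
    """Inverse-variance weighted fusion of (value, variance) pairs
    from different sensors; lower-variance sensors count more."""
    weights = [1.0 / var for _, var in readings]
    total = sum(weights)
    value = sum(w * v for w, (v, _) in zip(weights, readings)) / total
    return value, 1.0 / total

# e.g. a noisy camera-derived score and a cleaner steering-sensor score
fused_value, fused_var = fuse([(0.8, 0.4), (0.6, 0.1)])
```

The fused variance is smaller than either input variance, which is the basic payoff of combining sensors before classification.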