Monitoring Vehicle Occupants

Visual Monitoring of Driver and Passenger Control Panel Interactions

Researchers

Toby Perrett and Prof. Majid Mirmehdi

Overview

Advances in vehicular technology have resulted in more controls being incorporated in cabin designs. We present a system to determine which vehicle occupant is interacting with a control on the centre console when it is activated, enabling the full use of dual-view touchscreens and the removal of duplicate controls. The proposed method relies on a background subtraction algorithm incorporating information from a superpixel segmentation stage. A manifold generated via the diffusion maps process handles the large variation in hand shapes, along with determining which part of the hand interacts with controls for a given gesture. We demonstrate superior results compared to other approaches on a challenging dataset.
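The diffusion-maps stage mentioned above can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes a simple Gaussian kernel over flattened hand-image feature vectors and keeps the top non-trivial eigenvectors of the resulting Markov matrix as embedding coordinates.

```python
import numpy as np

def diffusion_map(X, eps=1.0, n_components=3):
    """Embed the rows of X into a low-dimensional diffusion space.

    Illustrative sketch only: Gaussian affinities, row-normalised into
    a Markov transition matrix, then the leading non-trivial
    eigenvectors (scaled by their eigenvalues) give the coordinates.
    """
    # Pairwise squared Euclidean distances between samples
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)
    K = np.exp(-d2 / eps)                  # Gaussian affinity matrix
    P = K / K.sum(axis=1, keepdims=True)   # row-stochastic Markov matrix
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)         # largest eigenvalues first
    # Skip the trivial constant eigenvector (eigenvalue 1)
    idx = order[1:n_components + 1]
    return vecs.real[:, idx] * vals.real[idx]
```

A gesture then traces a path through this embedding, which is what the manifold-path figures below depict.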

Examples

Example interactions with the dashboard of a Range Rover, captured using near infra-red illumination and a near infra-red pass filter:


Sample paths through a 3D manifold: the top row of images corresponds to a clockwise dial turn, the middle row to a button press with the index finger, and the bottom row shows how finer details, such as an extending thumb, can be determined.


References

This work has been accepted for publication in IEEE Transactions on Intelligent Transportation Systems. It is open access and can be downloaded here:

http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7486959

Automated Driver Assistance Systems

Real Time Detection and Recognition of Road Traffic Signs

Researchers

Dr. Jack Greenhalgh and Prof. Majid Mirmehdi

Overview

We researched automatic detection and recognition of text in traffic signs. Scene structure is used to define search regions within the image, in which traffic sign candidates are then found. Maximally stable extremal regions (MSER) and hue, saturation, value (HSV) colour thresholding are used to locate a large number of candidates, which are then reduced by applying constraints based on temporal and structural information. A recognition stage interprets the text contained within detected candidate regions. Individual text characters are detected as MSERs and grouped into lines before being interpreted using optical character recognition (OCR). Recognition accuracy is vastly improved through the temporal fusion of text results across consecutive frames.
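The temporal-fusion step can be sketched with a toy example. This is an illustration of the general idea rather than the paper's exact scheme: noisy per-frame OCR readings of the same sign are fused by first voting on the string length and then taking a per-position majority vote.

```python
from collections import Counter

def fuse_ocr_results(frame_results):
    """Fuse noisy per-frame OCR readings of one sign into a single string.

    Illustrative sketch of temporal fusion: pick the most common string
    length so positions align, then majority-vote each character
    position across the frames of that length.
    """
    if not frame_results:
        return ""
    # Vote on the string length first, so character positions line up
    length = Counter(len(s) for s in frame_results).most_common(1)[0][0]
    aligned = [s for s in frame_results if len(s) == length]
    # Per-position majority vote across the aligned frames
    return "".join(
        Counter(chars).most_common(1)[0][0] for chars in zip(*aligned)
    )
```

For example, the readings `["SL0W", "SLOW", "SLOW", "SLQW", "SLOWW"]` fuse to `"SLOW"`: single-frame misreads ("0" for "O", "Q" for "O", a spurious extra character) are outvoted by the consensus across frames.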

Publications

Jack Greenhalgh, Majid Mirmehdi, Detection and Recognition of Painted Road Markings. 4th International Conference on Pattern Recognition Applications and Methods, January 2015, Lisbon, Portugal. [pdf]

Jack Greenhalgh, Majid Mirmehdi, Recognizing Text-Based Traffic Signs. IEEE Transactions on Intelligent Transportation Systems, 16(3), 1360-1369, 2015. [pdf]

Jack Greenhalgh, Majid Mirmehdi, Automatic Detection and Recognition of Symbols and Text on the Road Surface. Pattern Recognition: Applications and Methods, 124-140, 2015.

Jack Greenhalgh, Majid Mirmehdi, Real Time Detection and Recognition of Road Traffic Signs. IEEE Transactions on Intelligent Transportation Systems, 13(4), 1498-1506, December 2012. [pdf]

Jack Greenhalgh, Majid Mirmehdi, Traffic Sign Recognition Using MSER and Random Forests. 20th European Signal Processing Conference, pages 1935-1939. EURASIP, August 2012, Bucharest, Romania. [pdf]


Data

Here is some data for the detection and recognition of text-based road signs. The dataset consists of 9 video sequences, with a total of 23,130 frames, at a resolution of 1920 x 1088 pixels. Calibration parameters for the camera used to capture the data are also provided.

https://www.cs.bris.ac.uk/home/greenhal/textdataset.html