Case Study - Pedestrian Accident Caught on Video

Neal Carter and the Luminous Forensics team investigated and reconstructed a crash involving a pedestrian and a Chevrolet SUV. According to witnesses, the pedestrian was walking northbound on X Street in the crosswalk when the SUV turned left and struck the pedestrian near the roadway median (see the diagram below). The pedestrian was thrown into the air and, after landing, was dragged by the SUV until the vehicle came to a stop. The driver's account differed from that of the witnesses: he stated that he saw the pedestrian on the corner, but that as he made his turn the pedestrian darted out in front of his vehicle and the crash was unavoidable.

The team investigated and analyzed this accident in order to answer the following questions:

  1. Did the driver run a red light?

  2. Did the pedestrian have a walk signal?

  3. Was the pedestrian visible to the driver?

Luminous Forensics reconstructed this crash and produced animations to communicate its findings.

Accident scene diagram

Investigation

As part of the investigation, the team inspected and documented an exemplar vehicle, the subject vehicle, and the subject intersection. The team gathered police reports, police photos, witness statements, and electronic data from the SUV for use in the analysis. During the scene and vehicle inspections, Mr. Carter documented and mapped the scene with ground-level photographs, a Faro 3D laser scanner, drone photographs, Aeropoints, and Pix4D software.

Video Analysis

Two surveillance cameras captured critical moments of the accident. One perspective was from a security camera on a nearby residence and the other was from inside a convenience store across the street from the residence. The image below shows a diagram of the two perspectives. These cameras captured the motion of the SUV before the impact and after the impact, but did not provide coverage of the impact itself.

This video footage enabled the team to determine the sequence of events. As the images below show, the SUV exits Camera 1’s frame and enters Camera 2’s frame with the pedestrian clearly underneath the vehicle. From analysis of this footage, Mr. Carter determined that the SUV was presented with a permissive green light, meaning that the driver could turn left when the intersection and crosswalk were clear.

Mr. Carter and the Luminous team evaluated the sequence of the traffic lights. They determined the frame rate of the Camera 1 footage by counting the number of frames that encompassed the visible yellow-light phase. Mr. Carter then obtained the traffic signal timing sheet, which showed that the yellow phase of that particular signal lasted 4 seconds.


The sequence of images below shows the light turning yellow at frame 3045 and then red at frame 3105, a span of 60 frames during which the traffic light remained yellow.

Mr. Carter used this information to determine that the frame rate of the residential camera was 60 frames divided by 4 seconds, or 15 frames per second.
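The frame-rate derivation above is simple arithmetic. A minimal sketch, using the frame numbers and the 4-second yellow phase stated above (variable names are illustrative):

```python
# Frame numbers observed in the residential camera footage
yellow_start_frame = 3045  # light turns yellow
red_start_frame = 3105     # light turns red

# Yellow-phase duration from the traffic signal timing sheet
yellow_duration_s = 4.0

frames_in_yellow = red_start_frame - yellow_start_frame  # 60 frames
frame_rate_fps = frames_in_yellow / yellow_duration_s

print(frame_rate_fps)  # 15.0 frames per second
```

The same approach works with any event of known duration that is visible in the footage; a signal phase is convenient because its length is documented.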

The movement of the SUV was analyzed photogrammetrically using a camera-matching technique common in accident reconstruction. To execute this technique, Mr. Carter used the scan data obtained during his scene inspection. The scan data was processed using two widely used and accepted methods: aerial scene mapping, which combines drone photos with surveyed GPS control points, and terrestrial scanning with a Faro 3D laser scanner. The scan data was overlaid on frames of the video, and the position and internal characteristics of the camera were varied until features in the scan data aligned with the corresponding features in the video frame. In this manner, the team reconstructed the location, orientation, and focal length of the two cameras. Combined with the camera frame rates, this allowed Mr. Carter to determine the position of the SUV through time as it turned left. The sequence of images below depicts the scan data at the scene and the photogrammetry process.
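The core idea behind camera matching can be illustrated with a toy example: project known 3D scan points through a candidate pinhole camera, measure how far the projections land from the matching features in the video frame, and vary the camera parameters until that error is minimized. This sketch is purely illustrative (the point coordinates, image size, and focal-length sweep are invented, not from the actual analysis, which used dedicated software):

```python
import math

def project(point_3d, focal_len, cx, cy):
    """Pinhole projection of a scene point given in camera coordinates (z forward)."""
    x, y, z = point_3d
    return (cx + focal_len * x / z, cy + focal_len * y / z)

def reprojection_error(scan_points, image_points, focal_len, cx=960, cy=540):
    """Mean pixel distance between projected scan points and matched video features."""
    total = 0.0
    for p3, p2 in zip(scan_points, image_points):
        u, v = project(p3, focal_len, cx, cy)
        total += math.hypot(u - p2[0], v - p2[1])
    return total / len(scan_points)

# Hypothetical scan features (meters, camera frame) and their pixel locations,
# synthesized here with a "true" focal length the sweep should recover
scan_points = [(1.0, -0.5, 10.0), (-2.0, 0.3, 15.0), (0.5, 1.2, 8.0)]
true_f = 1200.0
image_points = [project(p, true_f, 960, 540) for p in scan_points]

# Sweep candidate focal lengths; the best match recovers the camera's focal length
best_f = min(range(800, 1601, 50),
             key=lambda f: reprojection_error(scan_points, image_points, f))
print(best_f)  # 1200
```

In practice the camera's position and orientation are optimized along with the focal length, and the "features" are corners, poles, and lane markings visible in both the scan data and the video frame.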

Photogrammetry Technique

Motion Sequence

Mr. Carter and his team analyzed the timing of the SUV's turn and the pedestrian's motion using a software package called PC-Crash. This package uses physics-based equations to calculate the motion of vehicles caused by driver steering, braking, and acceleration inputs or by collision forces. The software allows the analyst to specify the vehicle and scene geometries and the roadway surface conditions, and then to simulate the motion of the vehicles. Mr. Carter simulated the Tahoe driving through the intersection and coming to rest in its documented rest position. The video below is a rendering of the motion of the vehicle from PC-Crash.
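PC-Crash itself is proprietary, but the kind of time-stepped simulation described above can be sketched in a few lines: integrate a vehicle's speed and position forward in small time steps under a braking input until it stops. All numbers here are illustrative assumptions, not values from the actual reconstruction:

```python
def simulate_braking(v0_mps, decel_mps2, dt=0.1):
    """Step a vehicle forward in time under constant braking until it stops.
    Returns a list of (time_s, position_m, speed_mps) samples."""
    t, x, v = 0.0, 0.0, v0_mps
    samples = [(t, x, v)]
    while v > 0.0:
        v = max(0.0, v - decel_mps2 * dt)  # apply braking deceleration
        x += v * dt                         # semi-implicit Euler position update
        t += dt
        samples.append((round(t, 2), x, v))
    return samples

# Illustrative only: 8 m/s (~18 mph) turn speed, 6 m/s^2 braking deceleration
path = simulate_braking(8.0, 6.0)
stop_time, stop_dist, _ = path[-1]
```

A full reconstruction package adds steering inputs, tire friction models, and collision impulses, and the analyst tunes the inputs until the simulated vehicle matches the documented rest position, as described above.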

Visibility Testing

One objective in the reconstruction was to determine whether the vehicle's A-pillar obstructed the SUV driver's view of the pedestrian. Mr. Carter conducted a visibility test using the actual SUV involved in the crash and a surrogate pedestrian dressed similarly to the subject pedestrian. Using the motion sequence determined through the analysis, the subject intersection was closed and synchronized positions for the surrogate pedestrian and the SUV were marked on the roadway. The driver's view was documented photographically from each position up to impact. The sequence of images below illustrates the visibility testing process. Mr. Carter determined through this process that the SUV's A-pillar would not have obstructed the driver's view of the crossing pedestrian.

Animation

The investigation and analysis above resulted in the development of a physics-based animation. The intent of this animation is to explain Mr. Carter's findings to a judge, jury, and/or counsel.