Derived from: https://assettocorsamods.net/threads/grand-prix-de-trois-rivières-road-layout.2124/ Imagine you have an onboard camera with a gyroscopic gimbal that records stabilizer data. You could use these data to infer the road surface imperfections, or the vibration sensed by the driver. That's the subject of this discussion. Following the referred post, I used video stabilizer software on race track videos to extract the stabilization data.
Here is a video of Michel Brousseau's F1600 on the GP3R race track, where the camera (with a lot of rolling shutter) was mounted on the car body. Stabilized in two degrees of freedom (2DOF, horizontal and vertical): https://www.youtube.com/embed/omhEMRW9XE0 Here is a video of Jean-François Boyte (provincial kart championship winner) on a Honda 200cc (open class with 16 HP), where the camera was mounted on his helmet. Stabilized in 3DOF (horizontal, vertical, rotation) plus lens correction, rendered at twice the real speed: https://www.youtube.com/embed/gV-8bbazeIU Just use the playback settings to slow it down to 0.5x.
The analysis is based on pixel motion from video, which has resolution and frame rate limits. Defining a point-of-view (POV) referential determines what kind of vibrations we look at. For example, stabilize a helmet ... or stabilize the horizon ... or stabilize the car frame ... or stabilize the tyre's hub; all of these are possible POVs.

Repeated from the other post ------------------------------------

Track roughness shakes the tyre patch, RLCy connected to the wheel hub, RLCy connected to the car frame, RLCy connected to the driver's body, RLCy connected to the helmet. NOTE: RLCy means "resistance, inductance, capacitance", the "y" to create an adverb 8) (more useless reading: https://www.scielo.br/scielo.php?pid=S1678-58782011000300005&script=sci_arttext&tlng=pt)

You can attach the cam to the car frame and a) stabilize on some horizon and b) look at the wheel hub (to get its vibration); OR, with the same cam position, a) aim at some horizon and b) get the vibration of the driver's helmet; OR, put the cam onto the driver's helmet and stabilize on some horizon to infer the cam's vibration. Inferring track roughness from helmet motion is not direct (see the four RLCy links above) but, if you want to simulate driver seat vibration and/or induced tyre slip/grip, more data is better than none. Concerning speed and bumps, they are frequency related (see the graph in the preceding useless reading) and, through the four RLCy links, subject to interpretation. Take the Nissan Micra race: those suspensions are too soft and unlink the helmet from the track too much. Take an open-wheel car and the helmet-track link is more pertinent.

----------------------------------

Data file content:
Frame number : Frame number of the source video. For interlaced video, the frame number will have an 'A' or 'B' appended, depending on the field.
Pan X : The number of horizontal panning pixels between (the middle line of) the previous frame and the current frame.
Pan Y : The number of vertical panning pixels between (the middle line of) the previous frame and the current frame.
Rotation : The number of degrees of rotation between (the middle line of) the previous frame and the current frame.
Zoom : The zoom factor between (the middle line of) the previous frame and the current frame.
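If you want to play with such a log, a minimal sketch of a parser follows. The exact file layout is an assumption (whitespace-separated columns in the field order described above); adapt the splitting to whatever your stabilizer plugin actually writes.

```python
import re

def parse_stab_log(lines):
    """Parse stabilizer log lines of the assumed form:
    <frame>[A|B] <panX> <panY> <rotation> <zoom>
    Returns a list of dicts, one per frame/field."""
    records = []
    for line in lines:
        parts = line.split()
        if len(parts) != 5:
            continue                       # skip headers and blank lines
        m = re.fullmatch(r"(\d+)([AB]?)", parts[0])
        if not m:
            continue
        records.append({
            "frame": int(m.group(1)),
            "field": m.group(2) or None,   # 'A'/'B' for interlaced video
            "pan_x": float(parts[1]),      # horizontal pixels vs previous frame
            "pan_y": float(parts[2]),      # vertical pixels vs previous frame
            "rotation": float(parts[3]),   # degrees vs previous frame
            "zoom": float(parts[4]),       # zoom factor vs previous frame
        })
    return records

# hypothetical sample data, just to show the shape of the output
sample = [
    "1A  0.00  0.00  0.000  1.000",
    "1B  1.25 -0.50  0.010  1.001",
    "2A -0.75  2.10 -0.020  0.999",
]
for rec in parse_stab_log(sample):
    print(rec["frame"], rec["field"], rec["pan_y"])
```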
Another thing: most onboard videos show a driver following the quasi-best racing line, so you can't expect data from anywhere else on the track.
I understand the concept, but where's the data? Which software performs this data extraction? Publicly available?
The data is not pulled out of the blue; it comes from differential video frame analysis. That's why the result depends on camera resolution and frame rate. I am used to working with these videos and plugin software (open source). On the other hand, the software computes, but my physicist's experience evaluates the meaning and the uncertainty of the results. Not recommended for everyone.
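To illustrate what "differential video frame analysis" means, here is a minimal pure-NumPy sketch that estimates the pan between two frames by FFT phase correlation, on a synthetic circularly-shifted image. Real stabilizer plugins use more robust motion estimation (block matching, feature tracking, rolling-shutter handling), so this is illustrative only.

```python
import numpy as np

def estimate_pan(prev, curr):
    """Estimate the (dx, dy) translation from prev to curr
    using FFT phase correlation."""
    F1 = np.fft.fft2(prev)
    F2 = np.fft.fft2(curr)
    cross = F2 * np.conj(F1)
    cross /= np.abs(cross) + 1e-12          # keep only the phase
    corr = np.fft.ifft2(cross).real         # delta peak at the shift
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = prev.shape
    if dy > h // 2:                         # wrap large shifts to negative
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dx), int(dy)

# synthetic "frames": a random texture shifted by dy=5, dx=3 pixels
rng = np.random.default_rng(0)
frame = rng.random((64, 64))
shifted = np.roll(frame, shift=(5, 3), axis=(0, 1))
print(estimate_pan(frame, shifted))         # expected pan: (3, 5)
```

On real footage the peak is blurred by rolling shutter, parallax and noise, which is exactly why resolution and frame rate limit the usable bandwidth of the extracted data.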
I am ready to share how to do it. I think the preparation is more important than the video processing, so I am open to helping someone prepare the "experiment". Targeting meaningful results needs preparation at the video capture stage, and determining the limits of these data is fundamental to their integration into the ACM engine.
The way to "integrate" this into AC won't be direct. The data will only serve for comparison, or, through a 3ds Max or Blender script, to create a Z-axis reference for the track mesh.
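As a rough idea of what such a Z-axis reference could look like, here is a naive sketch that integrates the per-frame vertical pan into a relative height profile along the track. All constants are assumptions (calibration, frame rate, speed), and the sketch ignores the whole RLCy chain, camera pitch and lens effects discussed above, so treat it as a starting point only.

```python
# Assumed parameters; a real conversion needs proper calibration.
FPS = 30.0            # video frame rate (assumed)
M_PER_PX = 0.002      # metres of road height per pixel of pan Y (assumed)
SPEED_MPS = 25.0      # assumed near-constant car speed, m/s

def z_profile(pan_y):
    """Cumulatively sum vertical pan into (distance, height) samples."""
    z, dist, profile = 0.0, 0.0, []
    for dy in pan_y:
        z += dy * M_PER_PX          # naive integration of vertical motion
        dist += SPEED_MPS / FPS     # distance travelled during one frame
        profile.append((dist, z))
    return profile

# three frames of hypothetical pan-Y data, in pixels
print(z_profile([0.0, 2.0, -1.0])[-1])
```

The resulting (distance, height) pairs could then be fed to a 3ds Max or Blender script to displace track mesh vertices along Z.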
Process of Dumoulin at GP3R: handheld camera, near-constant slow speed, stabilizer aimed as much as possible at the horizon, clown included 8(
Unable to post Isle of Man or F1 or Nissan Micra or IndyCar due to copyright claims on YouTube 8(