New camera stabilization idea.

j-cal

Member
Hello NS,

I had this thought the other month about an in-camera/post stabilization system.

So imagine you take the RED Dragon shooting 6K and add sensors for every axis the camera can move: one for yaw, one for pitch, one for roll, and so on. These sensors record all of that information as metadata telling you exactly how the camera moved in each shot. You then take this metadata and put it into stabilization software that can read it (no software exists out there that does this, that I know of). Then, using the 6K files, you post-stabilize the image using the metadata. Since you used the metadata there is no warping, no motion tracking, no nothing. It is an accurate stabilized image based off of what the sensors on the camera recorded.

You could do this with 4K down to 1080p as well; I just used 6K to 4K as an example.

So if we shoot 6K with the ultimate goal of stabilized 4K footage, we need a way to monitor the 4K frame while recording 6K. If our camera monitors showed the 4K area while we were recording 6K, that would allow for proper framing.

Now I think this idea makes a lot of sense, without too many downsides. Obviously you would still need to get a somewhat stable image to begin with, but what am I missing? Why would this not work?
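To make the post side of this concrete, here is a rough sketch of what I picture that software doing. Everything in it is made up for illustration (the metadata values, the frame sizes, the focal length in pixels are all assumptions), and a real tool would need far better math, but the core move is just sliding a 4K window around inside the 6K frame to cancel the recorded motion:

```python
import math
import numpy as np

# Hypothetical per-frame metadata: (yaw_deg, pitch_deg) recorded by the
# camera's motion sensors, relative to the first frame of the shot.
frame_metadata = [(0.0, 0.0), (0.4, -0.2), (0.9, -0.5)]

SRC_W, SRC_H = 6144, 3160   # 6K source frame (assumed dimensions)
OUT_W, OUT_H = 4096, 2160   # 4K stabilized window cropped out of the 6K frame
FOCAL_PX = 5000.0           # lens focal length expressed in pixels (assumed)

def stabilized_crop(frame, yaw_deg, pitch_deg):
    """Shift the 4K crop window opposite to the recorded camera rotation."""
    # A yaw of theta shifts the image by roughly focal_px * tan(theta) pixels
    # horizontally; pitch does the same vertically.
    dx = FOCAL_PX * math.tan(math.radians(yaw_deg))
    dy = FOCAL_PX * math.tan(math.radians(pitch_deg))

    # Center the crop, then offset it to cancel the recorded motion.
    x0 = int(round((SRC_W - OUT_W) / 2 - dx))
    y0 = int(round((SRC_H - OUT_H) / 2 - dy))

    # Clamp so the window never leaves the 6K frame.
    x0 = max(0, min(SRC_W - OUT_W, x0))
    y0 = max(0, min(SRC_H - OUT_H, y0))
    return frame[y0:y0 + OUT_H, x0:x0 + OUT_W]

# Stand-in frames (blank pixels) just to show the call pattern.
for yaw, pitch in frame_metadata:
    src = np.zeros((SRC_H, SRC_W, 3), dtype=np.uint8)
    print(stabilized_crop(src, yaw, pitch).shape)   # (2160, 4096, 3) every frame
```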

Thanks,

Jacob
 
It is an interesting idea for sure, but the software would be crazy to develop. It sounds like it would take years to go from start to finish.
 
Forcillo said:
It is an interesting idea for sure, but the software would be crazy to develop. It sounds like it would take years to go from start to finish.

I honestly cannot speak on the matter for sure, but it sounds like it would be a relatively straightforward program/application. Simple input data, simple designated correction. Sounds to me like you gotta bring this idea to the right people.
 
This stabilization idea is essentially the anti-shake system that comes standard on most camcorders and some DSLRs, if I'm not mistaken. I had to do something similar to this for work once, and although it sounds difficult, it can be broken down into a few (sort of) simple steps:

1) Sensors record spatial data (x, y, z points in space) based on a calibrated reference point (the ideally stabilized image, such that yaw, pitch and roll are all equal to 0).

2) The spatial data from the sensors is sent to a processor, where several algorithms (trig and linear algebra) are applied to output the yaw, pitch, and roll of each frame. Each one of these variables can be written in terms of degrees (in reference to the calibration data).

3) Using the output variables yaw (rotation about the vertical axis) and pitch (rotation about the horizontal axis), you can correct the unstabilized images by remapping each frame's pixel coordinates with what's called a rotation matrix (http://en.wikipedia.org/wiki/Rotation_matrix); a rough code sketch of this step follows below.

**Currently, I have no idea how you could correct for roll, given that each frame is 2-dimensional (there is no image data in the z-direction).

Voila, the output image is now a "stabilized" one (note the asterisked statement above). There are of course a million more things that could be said about each step, but those are the key points of stabilization. From there you could size the image down to whatever you like, but in theory the image would still be 6K.
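For anyone curious, here is roughly what step 3 looks like in code. It is only a sketch (Python with OpenCV, and the focal length, frame size, and axis/sign conventions are all assumptions on my part, not something read off a real camera), but the point is that for a pure camera rotation the whole correction collapses into a single warp with a homography built from that rotation matrix:

```python
import numpy as np
import cv2  # opencv-python

def rotation_matrix(yaw_deg, pitch_deg):
    """Build the 3x3 rotation matrix from the yaw/pitch angles of step 2 (degrees)."""
    y, p = np.radians([yaw_deg, pitch_deg])
    R_yaw = np.array([[ np.cos(y), 0, np.sin(y)],
                      [ 0,         1, 0        ],
                      [-np.sin(y), 0, np.cos(y)]])    # rotation about the vertical axis
    R_pitch = np.array([[1, 0,          0         ],
                        [0, np.cos(p), -np.sin(p)],
                        [0, np.sin(p),  np.cos(p)]])  # rotation about the horizontal axis
    return R_yaw @ R_pitch

def correct_frame(frame, yaw_deg, pitch_deg, focal_px):
    """Warp one frame to cancel the recorded yaw/pitch (step 3)."""
    h, w = frame.shape[:2]
    # Assumed pinhole intrinsics: focal length in pixels, principal point at the center.
    K = np.array([[focal_px, 0, w / 2],
                  [0, focal_px, h / 2],
                  [0, 0,        1    ]])
    R = rotation_matrix(yaw_deg, pitch_deg)
    # For a pure camera rotation, the corrective warp is the homography
    # H = K * R^T * K^-1 (exact signs depend on how the angles are defined).
    H = K @ R.T @ np.linalg.inv(K)
    return cv2.warpPerspective(frame, H, (w, h))

# Example: undo a 0.8 degree yaw and -0.3 degree pitch on one stand-in 6K frame.
frame = np.zeros((3160, 6144, 3), dtype=np.uint8)
print(correct_frame(frame, 0.8, -0.3, focal_px=5000.0).shape)
```

In practice you would derive the focal length in pixels from the lens focal length and the sensor's pixel pitch, and run every frame's recorded angles through the same call; the crop/downsize to 4K would happen after the warp.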
 
blatt said:
This stabilization idea is essentially the anti-shake system that comes standard on most camcorders and some DSLRs, if I'm not mistaken. …

Correct me if I am wrong here, but most camcorders and DSLRs use lens or sensor stabilization, correct? I know Canon's system uses stabilization in the lens, and Sony's, for example, uses sensor stabilization where it actually moves the sensor. I have never heard of this form of stabilization in camcorders. I am almost 100% sure it is done by either sensor or lens stabilization.
 
blatt said:
This stabilization idea is essentially the anti-shake system that comes standard on most camcorders and some DSLRs, if I'm not mistaken. …

Did some more research, and it turns out you're right. A lot of cameras do do this. Hmmmm, I did not know this was a thing.
 