I present a demonstration of the technique used to create the two live 9/11 video composites, first Chopper 7, then Chopper 5.
Higher quality versions of the compositing demonstrations may be downloaded here:
Chopper 5
Chopper 7
No airplane crashed into either Twin Tower. The various videos which depict a plane entering a building, such as Naudet, CNN Ghostplane, Evan Fairbanks, Luc Courchesne, and Spiegel TV, show just that – a plane entering a wall. They are devoid of the crash physics we would expect from an aluminum aircraft interacting with a steel and concrete structure. The plane does not twist, bend, break, explode or slow down. A certain frame of the CNN video shows no damage to the wall after the wing of the airplane has passed through.
The composi-traitors would never attempt a live composite showing the plane entering the tower. The timing, positioning and lighting are too critical. It is currently impossible to accomplish that effect in real-time with the precision needed to avoid detection. If the explosion went off even one frame before the plane hit, that would be very difficult to explain away.
However, the live shots do not show a plane entering the tower. In fact, the live shots were cleverly composed in such a way as to make them doable. Only 2 shots showed airplane images live – Chopper 5 and Chopper 7.
Objective
My present objective is to offer a theory of how the live 9/11 airplane videos were accomplished, and how one of them went wrong.
Concept and Methods
The live 9/11 composites were created using a real-time digital effects environment, such as Avid. To demonstrate, I will recreate the technique of the 9/11 composites using Adobe After Effects, because it is software that I own. After Effects is a layer-based compositor that does not operate in real time. However, the principles of compositing are exactly the same, regardless of software. I maintain that any parameters in After Effects which can be set one time, and then operate correctly without need for frame-by-frame adjustment, are parameters which will operate correctly in real-time, given a real-time environment such as Avid.
Listed features for Avid Symphony include:
# Real-time Chroma and Luma keys
# Real-time full-motion alpha keying
These are precisely the features which made the 9/11 real-time composites possible.
Shot Composition
Creating a convincing live composite of the 9/11 airplane event requires several important attributes that simplify the job enough to be doable.
1. Very brief appearance and disappearance of plane.
2. High contrast between sky and tower.
3. Plane path is across sky only.
4. Plane will disappear across a straight vertical edge.
5. All surfaces requiring airplane shadows are hidden.
6. Actual impact point is hidden.
7. Exploding walls are hidden.
8. Camera is as stable as possible.
9. No panning, tilting, zooming or focusing while airplane is on screen.
Violating any one of these 9 requirements makes realistic live compositing impossible. How likely is it that all 9 happened by chance, on both shots?
Compositionally, Chopper 5 and Chopper 7 are nearly identical. Both shots are from a mechanically stabilized helicopter platform. In both shots the helicopter is drifting slowly and steadily to the left. In both shots there is no zooming, tilting, panning or focusing while the plane is on screen. Both show a plane entering from the right side of the screen. Both have a straight, vertical, high-contrast tower edge and clear sky to the right. In both shots, the plane crosses in less than 1.5 seconds and disappears behind the edge. In both shots the (exploding) south and east faces of WTC2 are hidden.
Both shots satisfy all of the requirements for a doable live composite. The only compositional difference is that in Chopper 7 the plane appears to approach from an angle, while in Chopper 5 the plane crosses perpendicular to the camera view.
Test Shots
Test shots of the towers are made ahead of time and studied for luminance keying suitability. They are made at about 9 a.m. on a clear day, the easiest (only) type of atmospheric conditions to reliably duplicate. It is shown that it is quite easy and effective to pull a key from this footage.
The positions of the helicopters are known via GPS. The goal is to compose the shots in a way that can be duplicated within a margin of error. With Chopper 5, the idea is that the left edge of WTC2 goes smack in the center. In Chopper 7, WTC2 is completely obscured by WTC1, which is positioned dead center.
The Airplane Layer
With the tower test shots in hand, the airplane layers are made. Each airplane must match the corresponding tower shot in color, and must appear to travel 550 mph.
The two airplanes could originate as real video, or be drawn on computer (CGI). There are advantages and disadvantages to each method.
Real video has the advantage of looking like video. Noise patterns will be authentic. Properly composed, the lighting will automatically be correct. A real plane is shot 59.94 progressive, simultaneously from two (or more) cameras. It is done at a secure airbase location, with camera positions corresponding to the positions of the test shots. Since it is extremely unlikely that a real Boeing 767 can fly 550 mph at low altitude, it flies 275 mph, and every second frame is removed, doubling the velocity on video.
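The frame-removal trick above can be sketched in a few lines. This is a minimal illustration, not a claim about any actual footage; the pixel displacements are hypothetical numbers chosen only to show that dropping every second frame doubles the apparent per-frame motion.

```python
# Hypothetical illustration of the speed-doubling trick described above.
# A plane filmed at half the target speed advances a fixed number of
# pixels per frame; keeping only every second frame doubles that advance.

def drop_every_second_frame(frames):
    """Keep frames 0, 2, 4, ... so on-screen motion appears twice as fast."""
    return frames[::2]

# Positions (pixels) of a plane advancing 8 px/frame at the filmed speed:
filmed = [i * 8 for i in range(10)]        # 0, 8, 16, ..., 72
sped_up = drop_every_second_frame(filmed)  # 0, 16, 32, ... (16 px/frame)

assert all(b - a == 16 for a, b in zip(sped_up, sped_up[1:]))
```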
CGI has the advantage of being able to create a single flight path in 3D space, then render a video of that flight from any virtual camera position. CGI would automatically create a plane on a transparent layer, with no need to remove any background.
Either way, color, lighting and motion blur are adjusted as needed to blend into the test shots. The airplane layer ends up as a 59.94 fps progressive video plane flying right-to-left on a transparency.
Since the airplane will be disappearing into a layer mask, there is a danger that the plane might run too long, travel too far, and escape out the back side of the mask. One safeguard against this is to have the plane slow down significantly after it has traveled far enough to enter the mask, and before it exits the mask. After doing test shots, it is possible to know the position of the tower within approximately 20 pixels.
Each frame of the airplane overlay is positioned. The plane flies across at full speed until it is surely inside the mask, then it slows down. It slows down gradually, not all at once, so that if any part of the deceleration is seen, it can be explained as the natural deceleration from impacting the tower.
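The per-frame positioning just described can be sketched as a simple keyframe generator. All numbers here (starting position, speed, mask edge, decay factor) are illustrative assumptions of mine, not values recovered from any footage.

```python
# Hedged sketch of per-frame airplane positioning: full speed until the
# plane is safely inside the mask, then a gradual exponential slowdown so
# it can never escape out the back side. All parameter values are assumed.

def plane_x_positions(start_x, speed, mask_edge_x, decay=0.85, n_frames=90):
    """Right-to-left flight: constant speed until past the mask edge,
    then velocity decays each frame, bounding the total further travel."""
    xs, x, v = [], start_x, speed
    for _ in range(n_frames):
        xs.append(x)
        if x < mask_edge_x:   # safely inside the mask: begin easing off
            v *= decay
        x -= v
    return xs

xs = plane_x_positions(start_x=720, speed=12.0, mask_edge_x=360)
# Motion never reverses, and total travel past the edge stays bounded
# (geometric series: at most speed / (1 - decay) pixels beyond the edge).
assert all(a >= b for a, b in zip(xs, xs[1:]))
```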
Masking
It is necessary to remove the sky from the top layer. This is done by real time luminance keying (luma key).
Luma key makes transparent all pixels above an adjustable brightness threshold. A real edge on video is not perfectly sharp. The edges of the mask are made softer by adjustable degrees, with a parameter called “edge feathering”. The mask parameters are dialed in and tested during the minutes just before the actual event.
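A luma key with edge feathering reduces to a per-pixel alpha function. Here is a minimal grayscale sketch; the threshold and feather width are illustrative assumptions, and a real keyer operates on whole frames in hardware rather than pixel by pixel in Python.

```python
# Minimal luma-key sketch: pixels brighter than a threshold become
# transparent, with a linear feather band softening the edge.
# Grayscale luma 0-255; threshold and feather values are illustrative.

def luma_key_alpha(luma, threshold=200, feather=20):
    """Return per-pixel alpha: 0.0 = transparent (bright sky),
    1.0 = opaque (dark tower), with a linear ramp across the feather band."""
    if luma >= threshold:
        return 0.0                       # bright sky: fully keyed out
    if luma <= threshold - feather:
        return 1.0                       # dark tower: fully opaque
    return (threshold - luma) / feather  # soft edge in between

assert luma_key_alpha(250) == 0.0       # sky pixel
assert luma_key_alpha(40) == 1.0        # tower silhouette pixel
assert luma_key_alpha(190) == 0.5       # feathered edge pixel
```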
Synchronization
There is about a 1/2 second margin of error with respect to the explosions. As long as the explosion does not begin before the plane crosses, and begins no more than 1/2 second after the plane crosses, each live video should look OK. This is good because we cannot know with frame-accurate precision when the explosion will become visible on the “impact” wall.
There is no margin of error with respect to synchronizing the two airplane shots to each other. They are synced using SMPTE time code. SMPTE clock stamps every frame of video with numbers for hour, minute, second and frame.
The explosion is set to go off at a known wall-clock time: 9:03:11:00. The airplane layer videos have embedded time code beginning 20 frames before impact, at 9:03:10:10. On that video frame, the airplane is just outside the picture, to the right.
Master time code is transmitted via satellite from the studio to both helicopters. The airplane layer is set to “receive external sync”. At 9:03:10:10, the airplane layer plays automatically.
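The timecode arithmetic above can be checked directly. Note that the figures in the text (9:03:11:00 minus 20 frames equals 9:03:10:10) imply non-drop-frame counting at a nominal 30 frames per second; real 29.97 fps NTSC broadcasts often use drop-frame SMPTE code, which periodically skips frame numbers and is omitted from this sketch.

```python
# Timecode arithmetic sketch, assuming non-drop-frame counting at a
# nominal 30 fps, as the source's own figures imply. Drop-frame code
# (used in real 29.97 fps broadcast) is more involved and not shown.

FPS = 30

def tc_to_frames(tc):
    """Convert 'h:mm:ss:ff' time code to an absolute frame count."""
    h, m, s, f = (int(x) for x in tc.split(":"))
    return (h * 3600 + m * 60 + s) * FPS + f

def frames_to_tc(n):
    """Convert an absolute frame count back to 'h:mm:ss:ff'."""
    f = n % FPS; n //= FPS
    s = n % 60;  n //= 60
    m = n % 60;  h = n // 60
    return f"{h}:{m:02d}:{s:02d}:{f:02d}"

impact = tc_to_frames("9:03:11:00")
start = impact - 20                       # airplane layer rolls 20 frames early
assert frames_to_tc(start) == "9:03:10:10"
```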
Stabilization and Motion Tracking
The motion of the airplane must appear steady. Unstable camera motion would necessitate motion tracking of the towers. Real time motion tracking is unreliable in this situation, especially on Chopper 5, because the towers are in silhouette. There is not enough detail to reliably track them.
Without motion tracking, both live camera shots must be as stable as possible. A gyroscopically stabilized camera mount, such as a Wescam, is used. Helicopters cannot hold still. The best option is to fly very slowly, in the same direction as the airplane image, maintaining as constant and steady a speed as possible. Stability will not be perfect, but it will be good enough. Deviations from a perfect flight path will be small enough, and video quality poor enough, that the resulting unstable motion can be blamed on resolution and measurement error. True broadcast-quality copies of the final result must be kept top secret.
On-Board Compositing
For several reasons, the shots must be composited with an Avid system on board the helicopters, rather than at the studio. Communications satellites are set up to relay standard NTSC video signals. The composite is created in a different format, then converted to NTSC. A raw camera shot could be intercepted and recorded by the wrong people. Even if a secure transmission could be guaranteed, they would want to minimize the number of eyes that ever saw the raw shot.
American television conforms to the NTSC standard, which is interlaced video at 29.97 fps. Compositing is done using progressive images, not interlaced. The video is shot at 59.94 fps progressive, and the composite is done in this format. Each frame of the composited output is converted into one interlaced video field, thus becoming 29.97 fps NTSC. This NTSC signal is transmitted back to the studio as an ordinary news helicopter feed.
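The 59.94p-to-29.97i conversion just described amounts to weaving pairs of progressive frames into single interlaced frames. Here is a toy sketch with frames modeled as lists of scan-line rows; real hardware of course does this with dedicated circuitry, and the tiny frame sizes are illustrative only.

```python
# Sketch of 59.94p -> 29.97i: each pair of consecutive progressive frames
# supplies the two fields (even/odd scan lines) of one interlaced frame,
# halving the frame rate. Frames are modeled as lists of rows.

def interlace(progressive_frames):
    """Weave frame pairs: even rows from the first frame of each pair,
    odd rows from the second."""
    out = []
    for a, b in zip(progressive_frames[0::2], progressive_frames[1::2]):
        woven = [a[r] if r % 2 == 0 else b[r] for r in range(len(a))]
        out.append(woven)
    return out

# Four 4-row progressive frames, each row filled with its frame index:
prog = [[i] * 4 for i in range(4)]
ntsc = interlace(prog)
assert len(ntsc) == 2              # frame rate halved
assert ntsc[0] == [0, 1, 0, 1]     # fields woven from frames 0 and 1
```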
On board requirements:
Avid system
59.94 fps progressive camera, Wescam mount
Small broadcast switcher
Receiver for the master SMPTE clock from the studio
A total of three video layers are required for the effect:
1. The raw twin tower camera shot.
2. The airplane flying across a transparency.
3. The twin tower shot, with sky masked out in real time.
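The three-layer stack above reduces to two applications of the standard alpha "over" operation per pixel. Here is a minimal grayscale sketch; the pixel values are made-up placeholders, and a real compositor performs this across whole frames in real time.

```python
# Sketch of the three-layer stack as a per-pixel alpha-over composite on
# grayscale values. Layer order (bottom to top): raw tower shot, airplane
# with alpha, luma-keyed tower shot whose sky is transparent.

def over(top_value, top_alpha, bottom_value):
    """Standard alpha-over: top covers bottom in proportion to its alpha."""
    return top_alpha * top_value + (1.0 - top_alpha) * bottom_value

def composite_pixel(raw, plane, plane_a, keyed, keyed_a):
    mid = over(plane, plane_a, raw)    # airplane over the raw shot
    return over(keyed, keyed_a, mid)   # masked tower over everything

# Sky pixel (keyed layer transparent): the airplane shows through.
assert composite_pixel(raw=230, plane=80, plane_a=1.0, keyed=230, keyed_a=0.0) == 80
# Tower pixel (keyed layer opaque): the tower hides the airplane.
assert composite_pixel(raw=40, plane=80, plane_a=1.0, keyed=40, keyed_a=1.0) == 40
```

The masked top layer is what makes the plane appear to vanish behind the tower edge: wherever the key leaves the tower opaque, the airplane layer beneath it is covered.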
At the correct time, the airplane flies across the screen, right to left, and disappears behind the tower. The engineer stops the plane layer. If it is stopped too soon, the plane freezes or disappears in mid-air. If it is stopped too late, the nose of the airplane will poke out the back side of the mask.
In case of emergency, the last resort is to pull down the master fader, and transmit a black picture until the situation is rectified.
Report Card
Chopper 5
Grade: F
The crew on Chopper 5 blew it. They zoomed in too late, after the network was already broadcasting their shot. There is, of course, no plane in the wide shot, as it is quite impossible to realistically composite an airplane into a zooming shot.
The motion of the helicopter (and therefore the airplane) is stable enough to the eye. But careful measurements show that the airplane motion is slightly more stable on the unstabilized version of the footage.
The nose of the airplane pops out of the back of the layer mask.
The engineer faded to black (too late), stopped the plane layer, then faded back up again.
Chopper 7
Grade: A
The crew on Chopper 7 did a nice job.