Daniel Phelps

Maker/Educator

Maker-Culture and You: How 3D Printing and Hands-On Learning Can Augment Traditional Pedagogy

Workshop and Webinar facilitated by Daniel Phelps, Performing & Fine Arts Adjunct Assistant Professor & Multimedia Production Specialist.

Originally recorded on April 3rd, 2014

Sponsored by the Center for Excellence in Teaching and Learning (CETL).

Can 3D printing help accomplish your course goals? This workshop will explore how creative pedagogy using 3D printing & design can enhance your coursework, leading to increased interest in and understanding of your subject matter. Learn by invention in York College’s Makerspace. This event will also be simulcast in Google Hangouts.

Participation does not require Blackboard or any other technology.

An Ode to the “Professional” Editor #FCPX

This post has been cross-posted from the York Comm Tech Blog at yorkcommtech.net.

On April 12, 2011, Final Cut Pro saw its first major redesign and update since 2001… even earlier if you remember the program as Key Grip (the name of the program Apple purchased as the foundation for FCP in 1999).

As I tuned into the Twitter feeds of #FCP last night to experience the cumulative reaction of the 1,700 people at the FCPUG SuperMeet at NAB, I found myself outside of the “Reality Distortion Field” normally found in Apple keynote addresses. You see, unlike the next iPhone or the next iPad, Final Cut Pro has been the program that I have made my living with for the past 10 years. It has been my lifeblood, a passion, and the one piece of software that I can truly say I’m an expert with. What makes me so comfortable with this program is that I know how to fix this bugger when it breaks. I know how to avoid problems with this clunker because of known bugs, and more importantly, I can navigate this program with an efficiency that takes years to develop. It is another thing altogether to create something wonderful with this tool.

But as I utter these words, I realize that there are many other programs I rely on to do my job. Microsoft Word, any e-mail program, any operating system… These are all just cogs in a greater skill-set that lets me do my job effectively and efficiently. Owning Microsoft Word does not make me a professional writer. Knowing how to navigate WordPress does not make me a professional blogger. And owning a hammer and knowing how to swing it does not make me a carpenter. Anyone can give themselves a creative title until they have to build “it”. Only then can someone be called a “Professional”, whether it be a writer, a blogger, or Jesus.

With that said: if Jesus had been an editor, he would use Final Cut Pro X.

 

In my analysis of this announcement, I will look at the changes to FCP from two different perspectives: one from an editor’s perspective, and the other from the role of an Educator and Multimedia Systems Administrator. Additionally, because this announcement was a “sneak peek,” I will not recap the feature-set. For a complete list of changes to the program thus far, or to watch the keynote address, please visit the following links:

Feature List: http://aol.it/gHtcHz
Keynote Video: http://bit.ly/i9AFJx

From an Editor’s Perspective:

Forgoing the easy analysis of Final Cut Pro X as “iMovie on Steroids”, I truly believe Apple is trying to accomplish many goals. One of the obvious goals in the demo was efficiency disguised as making things “easier” for the editor. Every single new feature that was demoed is intended to make finding your media faster, and implementing decisions quicker.

As the demo for the Magnetic Timeline and Compound Clips was happening, I was counting the steps or “clicks” that I would no longer have to do for insert editing or choosing b-roll/environmental footage. Those who perform many repetitive actions on a nested sequence of over 700 clips know what I’m talking about. Those micro-tasks add up quickly, and after seeing FCPX “editing during ingest” of h.264, I was sold. Those features alone can save hours.

The Twitterverse might ask, “So what about the UI?” Well, to that I say, “Meh.” UIs change. They become more efficient, especially in Apple’s world. So what if it looks like iMovie? Spend a week with the program and learn how to be more efficient by starting over. You’re a professional, aren’t you? You’ve done it before with almost every other program you have. Grow up.

There is no way that this upgrade will make FCP less powerful. It provides the underpinnings for an exciting, powerful future where any format will “just work.” Isn’t that what we want all of our software to do… just work with what we want? Apple is leveraging its core technologies (OpenCL, Grand Central Dispatch, and Core Animation) to make things in FCP “just work.” From DSLRs to Flip cams to legacy codecs (I’m looking at you, DV)… these decisions [by Apple] will make editing easier for everyone, because they strip the high level of understanding away and make the technology invisible.

Knowing how to fix compression problems with MPEG-2 will always come in handy, but if those problems are not there in the first place because the software “took care of it,” who’s to know, and who’s to care? Producers only care about the final product. As an editor/producer, I look forward to a more efficient and seamless FCP experience so that I can concentrate on creativity and story, not codecs and metadata. Jesus can do my offline.

 

From a Systems Administrator/Educator Perspective:

If this version of Final Cut is not adopted by the professional community, FCP is dead as a Pro App. Lots of people will use it, but a toy it will be.

The decision to continue with Final Cut in the classroom will hinge on whether it is taken seriously in the post-production world. Will the new price ($299.00) scare post-production houses away? Or will producers ask for FCPX by name to keep costs down? The assumption 10 years ago was that because FCP was cheap and accessible, it would mean lower post-production costs and “less-skilled” editors. Producers quickly realized that only the former turned out to be true. But the “non-professionals” still called themselves shooters and editors… with their DVX-100s and XL-1s, they pointed their QuickTime exports to YouTube, starting MANY careers in the process. Consumers became producers with technology that was now more accessible.

I owe some of my career to my basement and my XL-1. My early knowledge and adoption gave me a leg up at the first of many small studios that I have worked at. This (DV+FCP) was technology that caught salty Media-100 and Fast VM/linear “Professional Editors” off-guard, and changed an industry in 8 years. I don’t believe this version (of FCP) will make waves like it did in the 2000s… but you will see a new generation of filmmakers with DSLRs and MacBooks creating stories with a different (read: more advanced) aesthetic than the one DV and our G4s hurled at the industry 10 years ago.

So will I install FCPX in the classroom? Only if it benefits the students professionally… and that depends ultimately on how you define a “professional.” This will take time. But given the current economic condition of education right now… there may be a clear choice. Adobe, and especially Avid, are more nimble and affordable than ever, but a 4G modem, a MacBook preloaded with FCPX, tied to a Canon T2i, for under $2K?… Well, that sounds like anyone, anywhere, can tell a professional story.

As an editor, I say use it immediately. If you like it, add it to your toolset and move on. If it’s more efficient, tell producers that you can get it done faster in FCPX. Charge less, make more.

As a systems administrator, wait a year. Look at the details. Call production houses and media outlets to see what their plans are. They will tell you their direction. The popular vote wins. (Hint: most provide multiple NLEs.)

If you have any questions or comments, please feel free to let me know. You can also find me on the twits @danielphelps, or email me at dphelps{at}york.cuny.edu.

 

 

Dimensional Interpretation: 3D Technologies vs. Popular Culture

Technology will never succeed in recreating the richness that the five senses deliver to our brain to interpret our world. The human race’s innate ability to capture an orthographic record of our experiences has progressed over the millennia in the pursuit of an image that represents the most accurate depiction available during that particular time period. Our ability to record still and moving images has come a long way from early cave drawings. Just as cave drawings are no longer in vogue for a variety of reasons, there are many factors that dictate the popularity or acceptance of any given visual recording medium. Specifically, over the past 150 years, the technology to succeed traditional two-dimensional (2D) visual representation has been widely available to provide another dimension to the human visual record. These three-dimensional (3D) technologies have provided us with the ability to represent the world in ways closer to the one decoded in our consciousness. Although the apparatus and technology for this 3D facsimile of our world has evolved since it was first conceived, civilization has yet to embrace it as it has other popular forms of visual media. Are media consumers ready to finally embrace this medium? More importantly, will we have a choice?

Early philosophers and scientists have long known the science and theory of 3D vision. Specifically, the dominant physiological apparatus of binocular vision is what allows all sighted living beings to interpret the three-dimensional world that we live in. The physician Galen recognized binocular vision as the major sense that lends itself to 3D vision. In the second century A.D., Galen notes in his writing On the Use of the Different Parts of the Human Body “that a person standing near a column and observing first with the left eye and then with the right eye will see different portions of the background column” (qtd. in Zone 5). That is, each eye records a slightly different image from the other. These two images are interpreted by the brain to translate spatial depth and object dimension. Further study by Charles Wheatstone in 1838 would go on to define the characteristics of binocular vision and the techniques that would be required to recreate 3D vision from two separate 2D images. This “Binocular disparity is one of the most, if not the most effective depth cue” (Pizlo 119).

Along with the hand drawings included in his paper, Wheatstone also debuted a new invention dubbed the “stereoscope,” and was able to successfully prove that the recreation of three-dimensional space was possible using traditional and undiscovered orthographic techniques. The stereoscope was able to provide a separate image to each eye, effectively tricking the spatial recognition portion of the brain into interpreting depth. This form of 3D viewing of 2D media would not become popular at the time, due to the difficulty of reproducing the 3D effect with hand-drawn or hand-painted images.

While stereoscopic hand drawing was successful within the scientific community in the early 1800s, the general public would not embrace the technology until the mid 19th century, when Wheatstone’s stereoscopic techniques were combined with early photographic methods. Stereo photography was introduced in conjunction with a cheaper, mass-produced stereoscopic apparatus dubbed the “Holmes Stereoscope,” named after its inventor, Oliver Wendell Holmes. As Rosalind E. Krauss argues in her book “The Optical Unconscious,” the rise of the Holmes stereoscope as a popular form of viewing 3D media had nothing to do with its technological superiority. Krauss states, “For the Wheatstone stereoscope, a product of physiological research in the 1830’s, was constructed to produce it’s experience of depth in a way that proved to be much more powerful than later devices such as the Holmes or Brewster stereoscope” (133). The popularity of the Holmes Stereoscope was due to its simplicity and affordability, not its technical advantage.

With the invention of the moving image in the late 19th century, and the development of the narrative storytelling model using motion pictures in the early days of the 20th century, the technology to capture and play back stereoscopic images quickly developed. Moving images quickly replaced their still counterparts as the dominant form of popular media. The popular stereoscope gave way to movies and the motion picture.

The recording of a stereo image, whether still or moving, has remained unchanged since the mid 1800s. Essentially, two disparate images are taken using two cameras or lenses. These imagers must be positioned, on average, 2.5 inches apart. Referred to as the inter-ocular or inter-axial distance, this particular distance is representative of the average gap between human eyes. This distance can be changed to manipulate the 3D effect during recording or playback, to achieve a more sensational 3D result. This adjustable-width, straight-ahead approach to shooting 3D is often referred to as “parallel” recording. Parallel recording is the simplest form of reproducing 3D images.
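For the curious, here is a minimal sketch of the parallel-rig geometry in numbers. It assumes a simple pinhole-camera model, and the inter-axial distance, focal length, and subject distances are hypothetical example values, not measurements from any particular rig.

```python
# Rough parallel-rig geometry: with a pinhole camera model, the on-sensor
# offset ("disparity") between the left and right images of a point at
# distance Z is approximately (inter-axial distance * focal length) / Z.
def disparity_mm(interaxial_mm: float, focal_mm: float, subject_mm: float) -> float:
    """Horizontal offset (mm, on the sensor) between left and right images."""
    return interaxial_mm * focal_mm / subject_mm

if __name__ == "__main__":
    interaxial = 63.5      # ~2.5 inches in mm (average human eye spacing)
    focal = 35.0           # hypothetical lens focal length, mm
    for subject_m in (1, 3, 10, 30):
        d = disparity_mm(interaxial, focal, subject_m * 1000)
        print(f"subject at {subject_m:>2} m -> disparity ~{d:.2f} mm on the sensor")
    # Disparity shrinks with distance: distant objects look increasingly
    # "flat," which is why rigs sometimes widen the inter-axial distance
    # to exaggerate depth on faraway subjects.
```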

For added enhancement of the depth effect, the technique of “convergence” was introduced to further emulate the natural vision of the eyes. Convergence can simply be explained as the visual phenomenon of the eyes “crossing” to view an object that is closer than an object that is far away. To understand this effect, all one must do is hold an object about 4 inches from the eyes and examine the natural tendency of the eyes to cross to keep the item in view. Although convergence is used to further the effect of 3D, its use can also become a detriment to the 3D effect because of subtle differences between the shape of the eyes and the shape of film or a video imager. Whereas the eye has a round “image plane,” the technology used to record electronic images (CCDs and film) is flat. This inherent flatness produces distortions in each image that make it difficult for the brain to resolve the intended 3D effect. The human eyes and brain usually correct for deviations in color, resolution and brightness, but in the case of the “keystoning” effect created by the convergence method, the brain has trouble believing the effect, which produces increased eye strain. Keystoning is one of many defects in the 3D recording process.
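As a small worked example of convergence, the toe-in angle each camera needs follows directly from the triangle formed by the two lenses and the subject. The numbers below are hypothetical; the point is simply that closer subjects demand harder toe-in, which is where keystone distortion comes from.

```python
import math

# Toe-in ("convergence") angle per camera for a converged rig:
# each camera rotates inward by atan((interaxial / 2) / subject distance).
def toe_in_degrees(interaxial_mm: float, subject_mm: float) -> float:
    return math.degrees(math.atan((interaxial_mm / 2) / subject_mm))

if __name__ == "__main__":
    interaxial = 63.5                      # ~2.5 inches, in mm
    for subject_mm in (100, 1000, 10000):  # 4 inches, 1 m, 10 m
        angle = toe_in_degrees(interaxial, subject_mm)
        print(f"subject at {subject_mm / 1000:>4.1f} m -> toe-in ~{angle:.2f} degrees per camera")
    # The closer the subject, the harder the cameras must toe in,
    # and the stronger the keystone distortion each flat sensor records.
```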

Talking only about the recording method of 3D images would cover only half of what makes stereoscopy effective. The technology for viewing 3D motion pictures is just as essential as the capture method. Over the past 100 years, several forms of 3D projection and display have been developed, and all are in use today. Essentially, the technology uses the same principles that were developed by Wheatstone’s experiments in the mid 19th century: isolate recorded left and right images and deliver them simultaneously to each eye.

In order of development, the types of 3D playback can be broken down into five different technologies: anaglyph, polarized, active shutter, isolated stereoscopic, and auto-stereoscopic. Anaglyph display is the cheapest and most inferior technology. Often using the classic red/blue glasses, it separates the left and right images into two colors, red for right and blue for left. The user wears red and blue glasses to filter out the opposite image when viewed on screen. This method creates inferior color representation due to the filters’ red/blue display method.
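As a concrete illustration of the channel-filtering idea, here is a minimal Pillow/NumPy sketch that composites two eye images into a single anaglyph frame. The file names are placeholders, and the channel assignment shown is the common red/cyan convention; swap the channels to match whichever eye/filter pairing your glasses use.

```python
import numpy as np
from PIL import Image

# Build a simple anaglyph: one eye's image supplies the red channel,
# the other supplies green and blue, so colored glasses can filter
# each eye's view back apart. File names below are placeholders.
left = np.asarray(Image.open("left_eye.jpg").convert("RGB"), dtype=np.uint8)
right = np.asarray(Image.open("right_eye.jpg").convert("RGB"), dtype=np.uint8)

if left.shape != right.shape:
    raise ValueError("Both eye images must share the same dimensions")

anaglyph = np.zeros_like(left)
anaglyph[..., 0] = left[..., 0]      # red channel from one eye
anaglyph[..., 1:] = right[..., 1:]   # green + blue channels from the other

Image.fromarray(anaglyph).save("anaglyph.jpg")
```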

The polarized method uses two types of polarized lenses (circular or linear) to project the left and right images, while the audience wears glasses with the opposite polarization that filter the polarized light for each eye. Unlike anaglyph, polarized tech does not alter the color of the recorded material and produces a full-color image.

Active shutter technology uses battery-powered glasses that actively “block” out the projected left and right images in alternation. The shutter method also produces a full-color image and lends itself to being a simple, yet expensive, playback technology. This method is the technology currently most widely available in home television sets.

Isolated stereoscopic technology is an updated form of the stereoscope that uses small displays with double images to produce the 3D effect. This cheap and pocketable 3D tech is often used in wearable glasses or cases for mobile devices.

The holy grail of 3D display technology is referred to as “auto-stereoscopic.” This is the only method that uses no glasses; it relies on shifted-lens technology to deliver a separate image to each eye. Essentially, the auto-stereoscopic delivery system overlays small lenses on a transmissive screen such as an LCD. Each row of lenses sends one image left and one image right. Each image can only be viewed by one eye at a time, essentially blocking the overlapping that can occur with other systems. Auto-stereoscopic screens produce a “sweet spot” for the user; that is, the intended audience must be an exact distance away from the screen for the 3D effect to work.

With all of the various forms of technologies available for 3D viewing developed over the years, why haven’t 3D movies, television and overall 3D storytelling become the dominant form of media? A story that is told in a format that more closely resembles human depth and vision has to be superior, right? Well, there are many reasons that 3D has not become consistently popular over the years.

Some would argue that the technology is not convincing enough, while others would argue that what the third dimension adds to storytelling has not been widely accepted due to its improper use as a storytelling apparatus. Ray Zone writes in “Stereoscopic Cinema and the Origins of 3-D Film”:

When the first publicly exhibited stereoscopic motion pictures were shown in 1915 at the Astor Theater in New York, Lynde Denig, a reviewer for Moving Picture World, wrote, “These pictures would appeal first by reason of their novelty, then because of the wonderful effects obtained, and after that, when they had become familiar, there would be the same old demand for an interesting story” (qtd. in Zone 84).

During the early 20th century, 3D and 2D storytelling were in direct competition for an audience via stereoscopic cards and movies, respectively. Although the technology to play back 3D films was widely available, albeit inferior to the quality of a similar 2D storytelling experience, the audience preferred the 2D familiarity. Eventually the 2D movie became the favored medium for storytelling due to its simplicity and believability. The “composite, synthetic nature of the stereoscopic image could never be fully effaced. An apparatus openly based on a principle of disparity, on a “binocular” body, and on an illusion patently derived from the binary referent of the stereoscopic card of paired images” (Crary 133). 2D wasn’t better or worse than 3D because of its lack of realism, but because of its familiarity and ease of acceptance with the audience. The public was able to “see” the film from a distance rather than be a part of it via a technical 3D “brain hack.” Although 3D storytelling persisted and at times flourished over the next 95 years, its popularity has never been able to match the 2D juggernaut that is modern cinema.

So what is the future of 3D storytelling? Well, in today’s 3D world, the technology to record and play back 3D has not changed much. What has changed is the proliferation of and access to 3D tech and, more importantly, content.

Over the past 3 years, many consumer companies and content producers have driven a resurgence in 3D. Phones and gaming systems are available with auto-stereoscopic screens, TVs are available that play back 3D with the assistance of active shutter technology, universal standards have been set, and the number of 3D-capable theatres has increased worldwide to over 7,000. The consumer push of 3D technology is only going to increase as tech companies encourage consumers to purchase the latest and greatest media devices. James Cameron agrees that consumer televisions are the future, but that they are lacking in one key area: content.

“We’re going to have 3D TVs all around us … and we’re going to need thousands of hours of sports, comedy and music and all kinds of entertainment” (qtd. in Herskovitz and Lewis 1-1).

 

If visual 3D is to be finally accepted in our society, I believe that the driving force will not be the technology, but the quality of the content created for the 3D apparatus. Current technology is often seen as a gimmick or hook to drive the media consumer to devour (and pay for) the content. With the different 3D technologies being pushed upon us without the content to support them, it is the content that will ultimately drive the embrace of the medium. With movies like “Avatar” that shun the traditional spectacle 3D has been treated as in the past, the third dimension will become another storytelling device, much like computer-generated graphics have changed modern storytelling. 3D will have to be seen and used by storytellers not as a device to sell tickets or gadgets, but as a way to deliver depth and further understanding of the story, game, or user interface. Stereographer Jeanne Guillot’s dissertation, “Is 3D Cinema Necessarily Spectacular?”, goes on to say:

This is the reason why I feel rather confident about the future of 3D cinema. I believe it will spark the curiosity and certainly the creativity of a number of directors, who will find ways of bringing this format into new realms. Stereoscopy is too rich a medium to remain confined to a restricted realm. (74)

 

3D has not changed much in the past 150 years. From still photography to 3D on mobile devices, the technology of 3D has never driven its adoption. The sensation of depth that 3D gives to viewers is no more than another storytelling tool. I believe that what will finally push this cinematic device over the tipping point into widespread acceptance will be the quality of the content using new cinematic techniques. A new breed of storytellers with unprecedented and universal access to the technology will develop 3D into a new art form, one that is accepted by the masses not for its “wow” factor, but for the feeling it yields to the story, character, or theme presented before them.

 

Works Cited

Crary, Jonathan. Suspensions of Perception: Attention, Spectacle, and Modern Culture. Cambridge, Mass.: MIT, 1999. Print.

Crary, Jonathan. Techniques of the Observer: on Vision and Modernity in the 19th Century. Cambridge: MIT, 1992. Print.

Guillot, Jeanne. “Is 3D Cinema Necessarily Spectacular?” French Film Festival, Richmond, Virginia. La Fémis, 01092009. Web. 1 May 2010. <http://www.frenchfilm.vcu.edu/2010/pdf/Version%20final%20de%20la%20these%20en%20anglais.pdf>.

Krauss, Rosalind E. The Optical Unconscious. Cambridge, Mass.: MIT, 1993. Print.

Pizlo, Zygmunt. 3D Shape: Its Unique Place in Visual Perception. Cambridge: MIT, 2008. Print.

Zone, Ray. Stereoscopic Cinema & the Origins of 3-D Film, 1838-1952. Lexington, Ky: University of Kentucky, 2007. Print.

Herskovitz, Jon, and Chris Lewis. “Avatar’s James Cameron urges producers to embrace 3D TV.” Reuters 13 May 2010: 1-1. Web. 14 May 2010. <http://uk.reuters.com/article/idUKTRE64C1CE20100513?feedType=RSS&feedName=technologyNews&utm_source=feedburner&utm_medium=feed&utm_campaign=Feed:+reuters/UKTechnologyNews+(News+/+UK+/+Technology+News)&utm_content=Google+Reader>.

 

 

DIY 3D Rig

This 3D rig is the 3rd generation of mounts that I have built for various cameras. It uses interchangeable cameras and custom mounts for a variety of uses. I currently have 2 HD cameras and 2 standard-definition cameras to use in this rig. It is upgradeable, so as new cameras are introduced, a simple mount can be created to accept them.

The HD version consists of two Kodak Zi8’s. The cameras themselves can be found for around $130.00. They feature a remote for sync, 1080p recording, 720p 60fps recording, and a 3.5mm audio-in port. These cameras have proved to be an amazing addition to this rig. They do very well in low light, and the 60fps setting is a dream for 3D. Both have HDMI out and USB ports built into the camera itself, so viewing and uploading is a breeze. I’ve also attached an Audio-Technica powered shotgun mic. In the future I hope to add another mic for 4-channel surround recording.

I am currently using this system for low-budget 3D shoots throughout the city. It is also a great complement to my large-format 3D camera, which also appears in this blog.

Stereoscopy for the 21st Century: The iPhone 3D Viewer

 Description of Project:

I have built three cheap and easy devices to produce and display 3D stereoscopic moving images. All of these devices use 19th-century techniques (stereoscopy) in conjunction with ubiquitous 21st-century consumer hardware (Apple Inc.’s iPhone).

At this stage, I intend to develop and produce:

  1. An economical paper version of my iPhone stereoscope for distribution, to view my works on the iPhone. (A relationship with an established 3D paper-glasses company, American Paper Optics, LLC, www.3dglassesonline.com, is already in place.)
  2. 3D works related to the field of nature conservation. Due to the proliferation of the iPhone in the urban environment, I feel that the nature of 3D would benefit those lacking awareness of the world outside the city.
  3. Distribution of these 3D works via a custom-made iPhone application built with the iPhone SDK, and via the iPhone’s own built-in YouTube application.
  4. An online community for user submission of works, to be viewed on my simple device.

Narrative:

My device and distribution method use networked portable devices as the playback medium. Although any handheld device capable of producing a moving image would be able to reproduce a stereoscopic effect, I have chosen the iPhone as the playback device to mock up for this project because of its network connectivity, its popularity with tech enthusiasts, and its ease of use for the consumer. As the delivery medium changes for the consumer, the basic idea of stereoscopy remains the same as it has for 120 years… look at disparate images through a viewer to create an added dimension and heightened reality.

To better develop this idea of creating modern stereograph equipment for enthusiasts, I have broken this project down into three parts:

Recording: I have built two devices capable of producing a 3-D image using standard consumer video cameras.

The large device was produced to test the use of my mirror concept in creating the two disparate images seen in stereoscopic photography. In addition to saving money by using only one video camera to produce a 3D image, this device provides a challenge to the “Do It Yourself” enthusiast. This device requires little editing of the recorded image, but is more difficult to build. Material costs are a little over $40.00, and the design can use any camera that is capable of producing a 16×9 (widescreen) image, has manual focus, and can record at a progressive (30p) frame rate. In real-world testing, the 3D quality of this device is acceptable.

 

The smaller and simpler device uses two cameras to record a stereoscopic image. This design incorporates the more traditional approach to stereo photography and differs from the above device in its lack of mirrors and build complexity. The device is cheaper and simpler to build but more expensive in total cost due to its use of dual cameras. In addition, because two separate sources (cameras) are used, as opposed to one with the larger device, more editing time is needed to prepare the images for playback on a portable device (a minimal sketch of that step follows below). In real-world testing, 3D images from this device are excellent.
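For the two-camera workflow, that editing step mostly amounts to pairing the clips and packing them into a single side-by-side frame that a stereoscope-style viewer can split back apart. Here is a minimal OpenCV sketch of that idea, under the assumption that the two clips have already been trimmed to the same sync point; the file names are placeholders.

```python
import cv2

# Pack synced left/right clips into one side-by-side video for a
# stereoscope-style viewer. File names below are placeholders.
left = cv2.VideoCapture("left_camera.mp4")
right = cv2.VideoCapture("right_camera.mp4")

fps = left.get(cv2.CAP_PROP_FPS) or 30.0
width = int(left.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(left.get(cv2.CAP_PROP_FRAME_HEIGHT))

out = cv2.VideoWriter("side_by_side.mp4",
                      cv2.VideoWriter_fourcc(*"mp4v"),
                      fps, (width * 2, height))

while True:
    ok_l, frame_l = left.read()
    ok_r, frame_r = right.read()
    if not (ok_l and ok_r):          # stop when either clip runs out
        break
    # Force the right frame to match the left's dimensions, then join.
    frame_r = cv2.resize(frame_r, (width, height))
    out.write(cv2.hconcat([frame_l, frame_r]))

for handle in (left, right, out):
    handle.release()
```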

Both devices produce stereoscopic images that can be played back on a portable device. Although they produce the images with varying degrees of accuracy and affordability, the devices are useless without a popular playback device. I believe that with technological advancements, and the popularity of 3D in the future, digital stereo images will become easier to produce and play back.

 

Playback: I have used a simple pre-fabricated “Pocket Stereoscope” attached to the iPhone. This device provides the eyes, and the brain, with the image separation needed to produce the stereoscopic effect required for adding dimension to a 2D image.

This stereoscope provides the correct focal length for the iPhone and does not require adjustment for those who don’t use glasses. To accommodate the different focal lengths of people who do require glasses, a traditional focal-length adjuster is needed for the next version of the playback device.

The intent is to create a cheap (read: 10-cent) version of this viewer, similar to the way traditional 3D glasses are made today.

The iPhone is not the only device that can potentially play back a stereograph. These same consumer lenses could be applied to other mobile phones, iPods, Portable DVD Players, PSPs, and any other device with a moderate pixel density that is capable of displaying full motion video.

Distribution: The iPhone was chosen for this project because of its integration with YouTube. With the built-in iPhone application, users can share 3D videos and experience videos recorded by other users around the world. YouTube also eases the burden of creating videos for any device. The stereoscope user has access to a diverse collection of videos without actually owning a 3D camera of their own. This networked use is the heart of the system and provides something that Holmes’ original stereoscope system did not: near-instant 3D viewing of anything on the planet, nearly anywhere on the planet.

 

A device with network connectivity has a big advantage over one that uses a physical format (e.g., DVD, flash media, hard disks) because of the amount of video that can be made available to the user. Distribution outlets will grow as all information devices are networked.

 

Other uses for these designs: To encourage the popularity of these devices, one could distribute the designs under a Creative Commons license. This method could spur the “DIY” community to embrace and improve on the technology. Both design concepts (camera and playback) could be improved upon and applied to current and new digital technology. I also believe that, because of the ease of networked distribution, users would create groups or “pools” of videos for their community, and the world, to enjoy and experiment with.

 

Instructions for use:

At the link below, you will find sample footage shot with the 3D devices, in addition to images of the devices being created.

The video is formatted for use on the iPhone or any QuickTime device. It can be uploaded to the iPhone through iTunes and used with stereoscopic glasses, or a pocket stereoscope (not provided), at the correct focal length.

You can also view sample video created by the recording devices directly from the iPhone YouTube application located on the Home screen. If you have access to an iPhone, please do a search for “Phelps iPhone 3D Test”, or go to the link below:

* Please note that streaming the “Phelps iPhone 3D Test” sample video will require a WiFi connection to be viewed at full quality. If viewed while connected to the cellular network, the video will be degraded immensely. Although… the effect of highly compressed 3D footage is quite spectacular. Various degrees of compression have been applied to the test footage as a way to measure its effects on the perception of depth.

(Dogma)[Rorschach] – (Painting 2009)

Project Abstract:

When conjuring up this idea for (Dogma)[Rorschach], I wanted to take into account some of the topics that I had been thinking about over the past few months, and a few themes stood out:

1. The interaction popular-culture media have with each other, and how we only use and discern what we know while throwing out the rest.

2. The definition of the manufacture of consent.

3. Control of public opinion is a means to controlling public behavior.

4. Fiction and fictitious personalities are what the public understands.

 

Obviously, some of these themes are pulled from my recent reading of Lippmann’s Public Opinion. But I can’t help but think that there have been other influences from the various forms of media that I’ve been consuming at the same time.

I found an amazing amount of similarity between Public Opinion by Walter Lippmann and Watchmen by Alan Moore and Dave Gibbons.

Are these two readings similar, or was I just projecting my recent thoughts, fascinations, and curiosities about society on a graphic novel?

In addition to these two pieces of work interacting with each other, I found myself thinking about one of the BIGGEST influencers of public opinion of all time: religious dogma.

How does society see itself in the writings of religious documents?

What you see at the top of the page is the result of this line of questioning: (Dogma)[Rorschach].

 

Quotes referenced/used:

Quran

Fear God and he will give you knowledge.

Believers, Jews, Sabaeans or Christians – whoever believes in God and the Last Day and does what is right – shall have nothing to fear or regret.

O you who believe! seek assistance through patience and prayer; surely Allah is with the patient.

He deserves paradise who makes his companions laugh.

God is with those who persevere.

The ink of the scholar and the blood of a martyr are of equal value in heaven

What God writes on your forehead you will become

Attend constantly to prayers and to the middle prayer and stand up truly obedient to Allah.

He said: The prayer of you both has indeed been accepted, therefore continue in the right way and do not follow the path of those who do not know.

 

Bible

Hatred stirs up strife, but love covers all sins.

Trust in the Lord with all your heart  and lean not on your own understanding.

He who has pity on the poor lends to the Lord, and He will pay back what he has given.

In all your ways acknowledge Him, and He shall direct your paths.

So Jesus answered and said to them, “Have faith in God.

Only fear the Lord, and serve Him in truth with all your heart; for consider what great things He has done for you.”

Be anxious for nothing, but in everything by prayer and supplication, with thanksgiving, let your requests be made known to God;

Every good gift and every perfect gift is from above, and comes down from the Father of lights, with whom there is no variation or shadow of turning.

Moreover, as for me, far be it from me that I should sin against the Lord in ceasing to pray for you; but I will teach you the good and the right way.