March 16, 2012

2012 CES and Looking Forward to the 2012 NAB Show

I have here a five-minute video documenting the 2012 CES Show that highlights key technologies and trends especially relevant to media professionals.

The most overused buzzwords at this year's show were “Smart,” “Cloud” and “Ecosystem.” It seemed like everyone was pitching some sort of smart media system, claiming to have an ecosystem that seamlessly shares content among all of your home, vehicle and mobile devices via their cloud services. This hype stems from an all-out war for control of your entire media ecosystem. CE manufacturers want to provide you with their integrated solutions and gateways, and then curate all of your content consumption across all of your devices and platforms. Providing consumers with “anything, anytime, anywhere” is the promise... but it has yet to be delivered.

One Achilles' heel in all this content/technology ubiquity is DRM (digital rights management). The studios and CE companies have a dismal track record at devising and deploying workable DRM. The numerous attempts at DRM over the past thirty years have done little more than inconvenience the bad guys, who inevitably release hacks and workarounds within days. Meanwhile, poorly designed DRM schemes can penalize paying customers to the point where they too start using hacks to circumvent the ill-conceived DRM that needlessly interferes with their content consumption.

4K video resolution is beginning to proliferate in the consumer realm, with JVC introducing the GY-HMQ10, a $5,000 4K camcorder.

And there was buzz about the new 55” OLED displays, especially Samsung's, as well as Samsung's "Smart Evolution" line of Smart TVs (including a prototype with gesture control, voice control and facial recognition).

And Moving Forward…

In my February 17, 2011 post on this blog, "When AR Gets Serious," I described how lightweight, flexible “smart” glasses will provide a convenient next-generation platform for mobile personal computing, once the technology is appropriately miniaturized and the display configuration is worked out. Google appears to be getting closer to bringing this technology to the mass market with the recent announcement of an Android-powered Google Glasses product, to be available later this year. Last year, during a small Augmented Reality sales event at Total Immersion's offices in Los Angeles, Google gave an overview of how its work on Google Goggles in its X offices would eventually lead to a consumer AR glasses product. I still predict, as I did in February 2011, that wearing some form of smart glasses will soon become at least as ubiquitous as wearing earbuds. In fact, they will probably become as much of a necessity as carrying a smart device is today. And then it will be on to embedding such technologies.

In my 2012 CES video I looked at the explosion of apps and peripherals for tablets and smartphones... 80,000 square feet of them in the North Hall “iPavilion.” Expect increasing growth in novel apps and outboard interfaces that leverage smart devices. One example is how retailers are adapting to shoppers who increasingly prefer to interact with their iPads rather than with store clerks. Nordstrom, Macy's and C. Wonder are among the firms offering apps (via store-wide Wi-Fi) to enhance the in-store shopping experience (NYTimes 3/9/12).

Expect to see an increase in 4K technologies foisted upon consumers, especially at the 2013 CES. 4K is already widely used in professional film/video workflows, so expect the 2012 NAB Show to feature a variety of new and improved 4K hardware and software solutions. What about the future of 8K, as in the NHK prototype system shown by Sharp at 2012 CES? It was stunning... the best moving pictures I've ever seen. But it is not likely to be seen anywhere in the consumer realm in the near, or even not-so-near, future. I do expect it as a potential next level of "super-digital" for the more exclusive high-end digital cinema installations down the road. We'll see how it unfolds at the Technology Summit on Cinema conference April 14 & 15 at NAB next month, and on the show floor.


July 1, 2011

2011 SID Display Week

The Society for Information Display (SID) Display Week is the foremost exhibition of emerging display technologies. It was held at the Los Angeles Convention Center May 16-21, 2011.
Some Key Emerging Technology Trends at SID 2011:
  • Further development in glasses-free 3D (auto-stereoscopic)
  • Dual-sided, super-thin LCDs
  • Transparent displays
Overall, Samsung and Toshiba continue to drive broad-based emerging display technologies for consumer applications.

May 1, 2011

NAB 2011, Sony Press Conference



At the Sony press conference, we were treated to a live 3D HD broadcast of the Masters golf tournament. The producers and directors of these sporting events are clearly becoming skilled at harnessing 3D technology to full effect. They had camera angles from right at the green's edge, providing an almost hyper-real depth of field from the golfer's POV. It was quite impressive.

Sony's biggest technology unveiling, and one of the most talked about of the show, was the F65 CineAlta 4K-resolution professional video camera. Veteran Hollywood cinematographer Curtis Clark (here with Alec Shapiro, Senior VP, Sony Professional Solutions) and other industry professionals are proclaiming that this camera shoots images SUPERIOR to film. That goal has been the Holy Grail of video image capture for decades. It looks like it's been achieved once and for all.

NAB 2011 - James Cameron Keynote


James Cameron, alongside his longtime technology partner Vince Pace, gave the NAB keynote. This was rather unusual, as the NAB keynote is usually given by a studio or broadcast CEO, or a Washington mover such as the FCC Chairman. Cameron is evangelizing about helping content producers realize 3D's full potential as a creative and powerful storytelling medium. He also announced the founding of his new company, Cameron-Pace, in Burbank, a technology and production services company whose “goal is to banish all the perceived and actual barriers to entry that are currently holding back producers, studios and networks from embracing their 3D future.”

Cameron gave a message during the CES 2009 Panasonic press conference in support of the launch of Panasonic's new 3D home video systems. No one there could have imagined that the film he was working on, Avatar, would turn out to be the record-breaking blockbuster of all time. Little more than two years later, the entire industry is paying close attention to what Cameron has to say and the moves he's making.

NAB 2011 Highlights


The NAB Show, held yearly in Las Vegas by the National Association of Broadcasters (NAB), is anything but just about broadcasting. It offers exhibits and conferences on everything to do with media... film, video, Internet, you name it. I've attended almost every year since the mid-1980s, and have seen industry sea changes year after year.
As I peered out over the expanse of one of the several show floors, I recalled how time and again the industry players can quickly shift places. I remember, at an NAB in the late 1980s, seeing the tiny booth of an unknown startup company showing a Macintosh-based non-linear editing system. Within a few minutes of seeing what the system could do, I predicted that this was the future of video editing. That company was Avid, and it did indeed portend the future of video post-production. Avid has for the past several years had one of the largest exhibits at NAB, and this year was no exception. Back in the late 1980s, when I first discovered Avid, Abekas was one of the big players at the show, with their leading-edge disk-based digital compositing/effects systems. Some twenty years later they're back to being an unknown, with a tiny booth and a single product offering. Funny how the industry-wide "gales of change" are constantly shifting the landscape.

NAB 2011 - Key Issues:

  • 3D production process (James Cameron keynote)
  • Transmedia: ubiquity and interoperability of content across all platforms/devices
  • "Content in the Cloud"
  • Auto-stereoscopic content and technology

April 1, 2011

How Concerned Should We Be about P2P File Sharing?

Presented at 2011 NAB "Content in the Cloud" conference
by Tom Mulally, Numagic Consulting

Despite the recording industry's past success in beating back Napster, the early peer-to-peer (P2P) music file-sharing service, P2P file sharing continues to proliferate. Now the content being shared is feature films and television programs. Technology has matured to the point where consumers can share large video files quickly and with minimal effort.
Enablers include:
• Increasing Internet bandwidth to the home
• Home PCs functioning as media servers
• Lower priced, higher capacity storage
• Free, easy to use torrent applications
• Technically proficient “seeders” of pirated content
• Improved video codecs

However, there are additional factors that are perhaps more disconcerting. Younger consumers are increasingly ambivalent about respecting copyrights, and experts point to a growing disregard for intangible property among them. A "bits are free" mentality may now be the norm. Are younger consumers becoming acculturated to expecting paid content to be free?
According to a CBS News poll, nearly 70 percent of 18-to-29-year-olds thought file sharing was acceptable in some circumstances, and 58 percent of all Americans who followed the file-sharing issue considered it acceptable in at least some circumstances.[1]
To study current P2P activity, torrents of the recently released feature film Limitless were documented. The first of over a dozen files of the film was posted within 48 hours of its opening day on Friday March 18, 2011. Though it is a “camera copy,” the image and sound quality of the 1.3GB file is acceptable for viewing on a desktop-sized display.
The graph below shows the number of copies of Limitless leeched (downloaded) between March 20 and 30. The cumulative total of file shares by March 30 was 223,375.
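A note on method: the tally behind the graph is a simple accumulation of daily counts observed on public tracker listings. The Python sketch below illustrates that bookkeeping only; the daily figures in it are hypothetical placeholders (only the March 30 cumulative total of 223,375 comes from the observations above).

```python
# Accumulate daily leech counts into a running total, as charted above.
# The daily numbers here are HYPOTHETICAL stand-ins, not the observed data.
from itertools import accumulate

daily_leeches = {
    "2011-03-20": 9500,    # hypothetical
    "2011-03-21": 14200,   # hypothetical
    "2011-03-22": 18100,   # hypothetical
    # ... one entry per day through 2011-03-30 ...
}

for day, total in zip(daily_leeches, accumulate(daily_leeches.values())):
    print(day, total)
```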



1. “Young Say File Sharing OK.” CBS News, Bootie Cosgrove-Mather, 2003-09-18

March 7, 2011

Key Trend: Exponential Growth of Technology

By Tom Mulally, March 7, 2011

The exponential growth of technology today is counter-intuitive to our human cognition. We are wired to think linearly. We intuit our world in equal steps, progressing through daily challenges and solving problems in a linear time domain by moving in predictable steps from Point A to Point B, and then to our ultimate destination at Point C. Large, complex projects typically progress linearly through the time-constrained, well-defined phases of concept design, schematic design, production, and implementation. This way of thinking and problem solving has worked for us throughout history.

However, exponential technological growth breaks with this paradigm. The clock speed of change is accelerating faster than evolution has wired us to comprehend. We are accustomed to a simple ten-step process progressing in even measures of 1, 2, 3, 4, eventually arriving at 10. In an exponential growth curve, however, the steps double: 1, 2, 4, 8, 16, 32, 64, arriving at 512 after ten steps. Extend this sequence further and by step thirty we are at over half a billion.
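The arithmetic is easy to verify. This minimal sketch (in Python, purely illustrative) prints the two progressions side by side:

```python
# Linear steps count 1, 2, 3, ...; exponential steps double each time.
exponential = 1
for step in range(1, 31):
    print(f"step {step:2d}: linear = {step:2d}   exponential = {exponential:,}")
    exponential *= 2
# step 10: linear = 10   exponential = 512
# step 30: linear = 30   exponential = 536,870,912
```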

This is the growth trajectory technology has followed since our earliest use of tools (sharpened stones) more than a million years ago. The rate of change was slow and even at first. We harnessed fire, improved our stone tools, then created and assembled deadly hunting devices 40,000 years ago. A sharp acceleration began with the introduction of the wheel, and especially of writing, approximately 5,000 years ago. Technology has followed an increasingly sharp upward slope ever since.

In the past 500 years the curve entered its sharper upward slope. Mechanical computing devices were introduced approximately 150 years ago (Babbage's Difference Engine design). A half-century later the U.S. census was processed by machines. Another fifty years brought the first electronic computers. Since then Moore's Law has been in effect, with processing power doubling approximately every eighteen months. Up close, Moore's Law looks like steady, predictable progress; within the context of more than a million years of technological change, however, it is a point on the exponential curve. What's most exciting (or disturbing, depending on your perspective) is that we are just now entering the part of the exponential growth curve that is steep and accelerated.

Increasingly accelerated exponential growth is further evident in how, in little more than a decade, the World Wide Web has proliferated. Or how, in a matter of a few years since its introduction, web-based social networking has gained hundreds of millions of users. Or how, in a matter of days, a new iPhone app can be used productively by tens of thousands. This exponential growth and proliferation of technologies, services and knowledge contradicts our conventional, linear mode of thinking. It is imperative that we get our heads around this paradigm shift and plan accordingly.

February 17, 2011

When AR Gets Serious

Tom Mulally, February 17, 2011

As technology continues its exponential trajectory of innovation, it is inevitable that machines will gradually integrate with our bodies. For example, inventor/futurist Ray Kurzweil predicts that in thirty to forty years we will have nanobots in our bloodstream. As for Augmented Reality (AR), it will first become practicable when it can provide hands-free augmentation of our vision. First generations of AR have existed for decades in HMDs (head-mounted displays) for military and specialty applications. Meanwhile, attempts at consumer AR applications have been, for the most part, novelties. The surface has barely been scratched on the potential for ubiquitous, context-aware personal mobile technology.

Lightweight, flexible LCD glasses (not goggles) will provide the visual interface for AR. Next-generation glasses, coupled with gesture and voice control, will enable widespread implementation of AR-like technologies (though they won't be called AR). "Situational enhancement" will be achieved through real-time inferencing. Data, images, and aural cues (a virtual "voice in your head" that YOU control) will provide ongoing assistance in communicating and performing tasks.

The glasses will also provide a convenient platform for our mobile device hardware, once it is miniaturized to the form factor of a glasses frame. We'll wear them comfortably all the time, until systems can eventually be integrated into our bodies. Since the glasses are frequently exposed to light, future generations of light-gathering technology can provide the power, augmented by bio-electric charges from our bodies when in darkened spaces. The lenses will automatically respond and resolve to external conditions on a pixel-by-pixel basis, while the fully context-aware technology in the frames continually gathers visual, audio, and positional/environmental data and processes and integrates it via continuous web connectivity. The overlaid imagery will need to be served at millisecond display/refresh rates. These systems will become as comfortable, and as necessary, as corrective lenses and hearing aids currently are to the visually and hearing impaired.

November 29, 2010

Tim Berners-Lee has published what could be another of his prophetic essays: "Long Live the Web: A Call for Continued Open Standards and Neutrality."

http://www.scientificamerican.com/article.cfm?id=long-live-the-web&page=4

November 18, 2010

Key Emerging Technology Trends for Media Professionals

by Tom Mulally

Introduction

The ever-expanding digital infrastructure and the clockspeed of technological change are continually transforming the way media professionals design, develop, produce and distribute content. How does one keep up? What technologies should media professionals be focused on? Which are most likely to gain traction and impact how you work? Are you prepared to deal with the relentless “gales of change” that are upon us? This paper examines six key emerging trends that affect the way that you, your colleagues, and the general public work, play, and in some cases, think.

The emerging trends we will examine are:

· The Semantic Web and related technologies

· Managing Unstructured Content

· Leveraging the Cloud

· Context aware devices and Augmented Reality

· Social Networking/Social Media and related

· Open Ended Learning and Knowledge Transfer

This paper applies to anyone who has an interest in media design, production and delivery. It is especially relevant to professional designers, producers, editors, writers, web developers, programmers, or most anyone who, in the course of doing their work, uses digital images and sound.

A number of applications and processes profiled in this paper are commercially available, or in various stages of research and development. The objective is to provide a look at what’s on the horizon. This includes the author’s proposal later in this paper for a media production workflow that consolidates and leverages these key technologies.

Emerging Trend #1: The Semantic Web & Related Technologies

If you had to pick one technological trend that will have the greatest impact on all areas of a media professional’s workflow, it’s the Semantic Web and related technologies. Ignore semantic technologies, and you’re ignoring the leading edge. “The semantic web leads to possibilities straight from science fiction” (Siegel 139).

The term “Semantic Web” was coined by World Wide Web inventor Tim Berners-Lee. In a 2001 Scientific American article (Hendler, Berners-Lee and Lassila) the authors stated “The Semantic Web... will have uses we haven't dreamed of. It will break out of the virtual realm and extend into our physical world.”

In computer jargon, the Semantic Web is a group of methods and technologies that allow machines to understand the meaning, or "semantics," of information on the World Wide Web.[1] Semantic web technologies often act behind the scenes, utilizing ontologies[2], taxonomies[3], metadata[4] and special controlled vocabularies[5] to create a rich, contextual user experience.
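To make "machine-understandable meaning" concrete, here is a minimal sketch using the open-source Python library rdflib. It is not drawn from any system described in this paper, and the ex: vocabulary is hypothetical; it simply shows the triple-and-query pattern that underlies Semantic Web data:

```python
# Facts are stored as subject-predicate-object triples, then queried
# with SPARQL. The "ex" vocabulary below is a hypothetical example.
from rdflib import Graph, Literal, Namespace, RDF

ex = Namespace("http://example.org/")
g = Graph()
g.add((ex.clip42, RDF.type, ex.VideoClip))           # clip42 is a VideoClip
g.add((ex.clip42, ex.depicts, ex.GoldenGateBridge))  # what it shows
g.add((ex.clip42, ex.durationSeconds, Literal(30)))  # simple metadata

# A machine can now answer: which clips depict the Golden Gate Bridge?
results = g.query("""
    PREFIX ex: <http://example.org/>
    SELECT ?clip WHERE { ?clip ex:depicts ex:GoldenGateBridge . }
""")
for row in results:
    print(row.clip)
```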

“This technology makes all your business data look like a high-powered database — regardless of whether that data is a document on an employee’s hard drive, an existing database, or a repository of many documents in any format — without having to centralize any of that original data into one place” (Pollock 52).

“Most large enterprise software vendors, and many small ones, have already begun to adopt Semantic Web technologies and embed them into their mainstream products. In fact, leading enterprise software vendors such as HP, IBM, Microsoft, Oracle, SAP, and SoftwareAG all currently provide applications and tools that support Semantic Web specifications” (63).

Google CEO Eric Schmidt refers to the Semantic Web as "autonomous search," and calls it "the next great stage of search" (Andrews, Voice). Google's recent acquisition of MetaWeb, developer of the Freebase knowledge base, confirms its commitment to developing semantic search. Semantic search will impact every field, from retail distribution to electronic health records management. It is a ubiquitous thread running through all of the emerging trends, and all facets of the digital media workflow.

TripIt (www.tripit.com) provides an example of how the Semantic Web is developing. It is an application that aggregates your airline, rental car, and hotel information and other travel data from whatever websites you booked them on. It then consolidates and organizes the data into an itinerary, and syncs it with your calendar, contacts, and other productivity apps. It also has a suite of context-aware smartphone tools (Emerging Trend #4) to keep the user organized and in contact with others throughout the trip.

Figure 1: TripIt mobile app and Semantic Web application screen shots. http://www.tripit.com/press/feature-screenshots/

TripIt delivers the kind of experience we can expect to see more of as semantic technology proliferates. And it applies directly to the media professional's workflow, too. TripIt's semantic aggregation process mirrors that of a typical media production, which must often aggregate volumes of source material in all sorts of file formats, from diverse content providers, sometimes from all over the world, and often in multiple languages. The same semantic technologies that are making TripIt successful will soon be applied to the media production process.

“The semantic web reveals relationships between concepts, people, and events that are embedded in the wealth of content on the web but not always easy to see using other means” (Johnson et al. 7). An example of how these semantic relationships can work is Apture (www.apture.com), a free semantic application enabling readers to get rich content without leaving their current web page. Figure 2 shows how, when you highlight text on an Apture-enabled page, linked content pops up, giving you additional layers of contextual data... text, graphics, and videos.

Figure 2: Highlighting text on an Apture enabled web page. www.apture.com

Apture is but one example of how media professionals will be able to leverage the capabilities of semantic search, to add deeper context to content, and provide new ways to present a richer media experience.

ACTIVE Knowledge Powered Enterprise

What do enterprise-level semantic technologies look like? You can get a look and feel from a European Union semantic initiative, the "ACTIVE Knowledge-Powered Enterprise" program (http://www.active-project.eu/).

ACTIVE is a consortium of twelve partner organizations from seven European countries, coordinated by British Telecommunications. The intent of the consortium is to increase the productivity of knowledge workers by using semantic tools to "convert the 'hidden intelligence' of enterprises [Emerging Trend #6] into transferable, interoperable and actionable knowledge to support collaboration and enable problem solving" (ACTIVE).

According to the ACTIVE web site, (http://www.active-project.eu/about-active.html) they have three integrated research themes:

· Easier sharing of information through a combination of formal techniques based on ontologies and informal techniques based on user tags[6], so-called folksonomies[7].

· Sharing and reusing informal knowledge processes, by learning those processes from the user's behavior.

· Understanding the user's context, so as to tailor the information presented to fit the current task.

The ACTIVE "knowledge workspace" is designed to run in the background behind common applications; it stays with you as you switch tasks, and is "working even when you're not." For example, modules were developed for specific tasks, such as context-based knowledge management tools for Accenture's proposal development processes and the "ACTIVated Design Framework" for Cadence (Accenture and Cadence are among several corporate sponsors). In Accenture's case, these modules increase the efficiency of the normally time- and labor-intensive process of creating proposals. ACTIVE improves collaboration while allowing proposal managers to allocate, track, and manage the work of development teams (Djordjevic et al.).

Similarly, specific modules and toolsets could be developed for the content production process. The way these might work is described later in the section “Concept for a Next Generation Content Production Process,” and shown in Figure 10.

The ACTIVE application is available for non-commercial research and demonstrations here: http://www.active-project.eu/publications/knowledge-workspace-software.html

Semantic Wikis

Another example of a semantic application is the semantic wiki[8]. Traditional wikis, such as Wikipedia, have proven effective as collaboration tools in workgroup environments, and media production professionals regularly use wikis during projects for basic knowledge management tasks. A semantic wiki goes further, using knowledge modeling and the ability to capture or identify information and relationships about the data within pages, in ways that can then be queried or exported (Kamel Boulos).

Semantic MediaWiki[9] (SMW) is an extension of MediaWiki, the wiki application that powers Wikipedia. SMW is in active use on hundreds of sites, in many languages around the world, including at Fortune 500 companies.

Now developers have combined semantic wikis with social media applications. KiWi (Knowledge in a Wiki), another project in the EU 7th Framework Programme, is an open-source development platform for building semantic social media applications. The developers have built a "web-based environment (the 'KiWi system') that provides support for knowledge sharing, knowledge creation, and coordination in software and project knowledge management" (KiWi). Documentation, video demos, and the KiWi applications are available at the project site: http://kiwi-community.eu

Emerging Trend #2: Managing Unstructured Content

Viewing and documenting hours of video content is often mind-numbing drudgery that can take hours, sometimes days, of valuable production and archiving time. One of the goals of applying semantic technologies and analytics to video production is to make sense of this unstructured content... video, audio, images... content that often does not come with readable text or metadata attached to its essence. Automatic tagging and creation of metadata for such content would be especially helpful to archivists and news editors, as well as to consumers who have many hours of video content but little idea of what's there. Foreign-language audio further complicates the management of unstructured video content. How can we possibly manage and make sense of this growing flood of video content?

IBM has addressed this head-on, and is at the forefront of developing systems that can "automatically index, classify and search large collections of digital images and videos" (IBM). The IBM Multimedia Analysis and Retrieval System (IMARS)[10] works by applying computer-based algorithms that analyze the visual features of images and videos, which subsequently allows them to be automatically organized and searched based on their visual content.

In addition to search and browse features, IMARS also:

· Automatically identifies, and optionally removes, exact duplicates from large collections of images and videos

· Automatically identifies near-duplicates

· Automatically clusters images into groups of similar images based on visual content

· Automatically classifies images and videos as belonging or not to a pre-defined set (taxonomy) of semantic categories (such as ‘Landmark’, ‘Infant’, etc.)

· Performs content-based retrieval to search for similar images based on one or more query images

· Tags images to create user defined categories within the collection

· Performs text based and metadata based searches (IBM site).

IBM IMARS is but one of several initiatives in IBM's larger Unstructured Information Management Architecture (UIMA) program.
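To give a feel for the kind of processing involved, the toy sketch below clusters "images" by visual content using color-histogram features and k-means (via scikit-learn). It is a generic illustration of one IMARS-style task, not IBM's implementation:

```python
# Reduce each image to a feature vector, then group similar vectors.
import numpy as np
from sklearn.cluster import KMeans

def color_histogram(pixels: np.ndarray, bins: int = 8) -> np.ndarray:
    """Turn an HxWx3 RGB array into a normalized per-channel histogram."""
    hist = [np.histogram(pixels[..., c], bins=bins, range=(0, 255))[0]
            for c in range(3)]
    vec = np.concatenate(hist).astype(float)
    return vec / vec.sum()

# Random pixels stand in for a decoded image collection (demo only).
images = [np.random.randint(0, 256, (64, 64, 3)) for _ in range(20)]
features = np.stack([color_histogram(img) for img in images])

labels = KMeans(n_clusters=4, n_init=10).fit_predict(features)
print(labels)  # cluster assignment for each image
```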

IBM TALES (Translingual Automatic Language Exploitation System) is a UIMA-based system that performs multimedia mining and translation of broadcast news and news websites. For broadcast video news, TALES performs video capture, keyframe extraction, automatic speech-to-text conversion, machine translation of the foreign text into English, and information extraction. Figure 3 below shows how English speakers can monitor the translated news in near real time, or place English-language queries over the stored foreign-language content and get results, both video segments and web pages, from any of the supported languages, all translated into English on a single search-result page. TALES has been deployed for several IBM customers, who use it to monitor Arabic, Chinese, and English broadcast news sources.
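Schematically, that workflow is a fixed sequence of stages applied to each captured segment. The stubbed-out sketch below mirrors the sequence named above only; the real components are proprietary IBM systems, and all names here are illustrative:

```python
# Each captured broadcast segment flows through the stages in order.
from dataclasses import dataclass, field

@dataclass
class NewsSegment:
    video_uri: str
    keyframes: list = field(default_factory=list)
    transcript: str = ""
    translation: str = ""
    entities: list = field(default_factory=list)

def extract_keyframes(seg):    seg.keyframes = ["t=00:00", "t=00:30"]; return seg
def speech_to_text(seg):       seg.transcript = "<foreign-language text>"; return seg
def translate_to_english(seg): seg.translation = "<English text>"; return seg
def extract_information(seg):  seg.entities = ["<named entities>"]; return seg

segment = NewsSegment(video_uri="udp://capture/channel7")  # hypothetical feed
for stage in (extract_keyframes, speech_to_text,
              translate_to_english, extract_information):
    segment = stage(segment)
print(segment)
```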

Figure 3: TALES performs automatic multimedia mining and translation of broadcast news and news Web sites

Korean researchers (Jung et al., 2007) are developing applications that quickly analyze long video files to automatically create short "abstracts" (Figure 4). Their goal is to create a comprehensive, several-second "trailer" of a long video clip, without having to wade through the entire video in real time. Figure 4 below shows their graphical user interface, which has a layout familiar from professional editing applications.

Figure 4: Screen shot of “Abstraction Framework for Story-Oriented Video” Korea Advanced Institute of Science and Technology and Korean Broadcasting. http://nclab.kaist.ac.kr/papers/Journal/narrativebasedabstrationframework.pdf

The Salzburg Media Lab has been working for several years to develop media platforms that provide "meaning-based management" (Burger) of digital audio-visual archives, especially unstructured video content. They call this next generation of video search a "semantic turn in rich media analysis," and have developed hybrid systems for both semi-automatic and automatic "semantic enrichment" of content.

One such system, called the "Smart Content Factory," is designed to use automatic feature extraction, including speech-to-text transcription, combined with ontologies for semantic search within its domain. Their second system, called LIVE, "combines methods of both automatic and semi-automatic detection, extraction and annotation of content with a knowledge-base under the control of a semantic based media framework." The framework "propagates knowledge and contextual information to a recommender system which thus to some degree becomes aware of the meaning of the media" (Burger). Salzburg Research incorporates third-party systems such as Virage Video Logger and Smart Encoding.


Emerging Trend #3: Leveraging The Cloud

Once you get past the frequent hype and misinformation, you’ll find that cloud computing is in fact a long running trend that “radically simplifies how you deploy, maintain, and access software, platforms, and infrastructure” (LiveOps).

“Basically, cloud computing means obtaining computing resources—processing, storage, messaging, databases and so on—from anyplace outside your own four walls, and paying for only what you use” (Fitzgerald, 2008). Media professionals stand to benefit on all levels, because everything you need is available through the Internet as a service, from concept development through final distribution of content. The cloud, especially when combined with the proliferation of 4G and other high-speed wireless services, is a key enabler for the digital infrastructure.

The “cloud” concept is not new. Datacenters, remote storage and remote computing have existed for years. What is new is “the way high-speed Internet access and almost limitless supplies of storage and processing power can now be pulled together” (Fitzgerald). A Pew Research study predicts that over the next ten years, most people will use cloud apps daily, sharing and accessing information over networks (Pew). Gartner predicts that in 2013 the cloud services market will reach $150 billion (Gartner).

"Cloud computing is the convergence of three major trends: virtualization, where applications are separated from infrastructure; utility computing, where server capacity is accessed across a grid as a variably priced shared service; and Software as a Service (SaaS), where applications are available on demand on a subscription basis" (rPath).

Figure 6: “Cloud” resources for a typical video post-production workgroup configuration (Mulally)

The cloud enables media professionals to access and share data and applications from virtually anywhere in the world (within the limitations of network speed). As shown in Figure 6 above, a single content creator has connectivity to diverse resources that once required huge investments, or weren't available at all.

Cloud services are breaking down the last barriers-to-entry for anyone to achieve their potential as a professional content creator or distributor. A small, independent production group can rent cloud storage, applications, and rendering resources on an as-needed, pay-as-you-go basis. This is a sea change from the traditional model of the production business in a bricks-and-mortar facility, heavily leveraged from technology purchases and leases, with full time engineers and I.T. specialists to maintain the operations.

Animoto

One media-related company that has successfully leveraged the cloud is Animoto (www.animoto.com). As profiled in Michael Fitzgerald's May 25, 2008 New York Times article about cloud computing, Animoto has created what co-founder Brad Jefferson calls "an on-demand video platform." Since 2006 Animoto has provided a service whereby you can turn your event photos and videos into user-generated, automated-yet-customizable videos. The Animoto interface lets consumers, businesses and educational users upload photos and videos, then select from royalty-free music provided through a partnership with music licensing and marketing firm Rumblefish.



Figure 7: Animoto iPhone application screen shots. http://itunes.apple.com/us/app/animoto-videos/

Animoto also has an iPhone app (Figure 7) that lets users produce professional-looking videos from the photos on their devices. In 2008, "Facebook users went into a small frenzy over the application, and Animoto had nearly 750,000 people sign up in three days. To satisfy that leap in demand with servers, the company would have needed to multiply its server capacity nearly 100-fold. Instead, they added capacity using Amazon's cloud, at a cost of 10 cents per server per hour, with additional expenses for bandwidth and storage. When demand slowed, Animoto automatically lowered its server use, and its bill" (Fitzgerald).
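The elasticity in that account is easy to express in code. The sketch below uses today's AWS SDK for Python (boto3) to show the scale-up/scale-down pattern; Animoto's actual 2008 tooling would have differed, and the image ID and instance type here are hypothetical placeholders:

```python
# Launch or terminate servers so capacity tracks demand (pay-as-you-go).
import boto3

ec2 = boto3.client("ec2")

def scale_to(target: int, running_ids: list) -> None:
    """Adjust the fleet to exactly `target` servers."""
    if target > len(running_ids):
        count = target - len(running_ids)
        ec2.run_instances(
            ImageId="ami-0123456789abcdef0",  # hypothetical render-node image
            InstanceType="c5.large",          # hypothetical sizing
            MinCount=count,
            MaxCount=count,
        )
    elif target < len(running_ids):
        ec2.terminate_instances(InstanceIds=running_ids[target:])

# Demand spike: scale_to(100, ids); when the spike passes: scale_to(5, ids)
```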

Citizen Global

CitizenGlobal (http://www.citizenglobal.com/) uses the cloud and crowdsourcing[11] to provide a global exchange where video and film professionals can directly connect their content with the people and places that need it most. It provides a central hub where creators worldwide can upload full-resolution, broadcast-quality footage (not the heavily compressed, consumer-grade video found on YouTube).



Figure 8: CitizenGlobal website http://www.citizenglobal.com/

The Oprah Winfrey Show's "No Phone Zone" awareness campaign to reduce distracted driving has used CitizenGlobal. The show leveraged the CitizenGlobal online studio suite, putting tools directly into the hands of creators to edit "No Phone Zone" public service announcements. Audience members were able to upload their own footage, then create their own custom PSAs by mixing it with media assets from The Oprah Winfrey Show's media library using CitizenGlobal's in-browser online editor. By leveraging cloud technologies, CitizenGlobal has created a new means of doing video post-production and content distribution.

The cloud-based business model is still in its early-adopter stage, with kinks being worked out, and there are risks that need to be mitigated, especially related to security. But large cloud services innovators such as Salesforce.com and Rackspace have proven that the concept and business model are viable. And many of the large I.T. service providers now offer cloud services... IBM, EMC, SAP and Oracle among them.

Emerging Trend #4: Context Aware Technologies, and Augmented Reality

Many consumers have in their pockets and purses context-aware[12] smartphones and tablets that, among many things, can transmit where the user is and, to some degree, what they are doing. The multiple sensory inputs (video, audio, GPS, accelerometer, ambient light sensor, proximity sensor, 3-axis gyro, and Internet connectivity) combine to provide a powerful feature set. The surface has barely been scratched on the possibilities these powerful context-aware smart devices offer, especially when combined with the other trends described here.

“According to a recent Gartner Report, mobiles will be the most common way for people to access the Internet by 2013” (Johnson et al. 9). And when mobile high-speed broadband data rates begin to proliferate in the coming months, smart devices will then have the power to perform magic.

Media professionals need to embrace the potential of high-speed, context-aware smart devices and design the future. Google, Facebook, Android/iPhone/iPad app developers, and numerous tech-savvy marketers are frenetically developing applications and channels to exploit them. Context-aware devices also provide niche users, such as museums (below), retailers, and various special venues, with new means to engage the public.

Google

Google is among the leading-edge developers of context-aware technologies, pursuing initiatives outside its core business, such as automated self-driving cars that have been road-tested throughout California (Tuttle, 2010). Meanwhile, Google is aggressively pursuing a convergence of context-aware devices and semantic search, with its constantly evolving search platform integrated with phones and smart devices running the Android operating system. Google CEO Eric Schmidt announced at a September 2010 conference that current technological advancements are nearing the realm of science fiction, and that we are "about to see a new age of 'augmented humanity,' when computers will make it possible for us to do what we really want to do" (Andrews). This was at a conference where Google demonstrated updates to the Android platform, including real-time voice control of devices and real-time language translation.

The Context Aware Museum Guide

One example of these technologies in use comes from researchers at Carnegie Mellon University, who developed forward-thinking technology that merged semantic search, context awareness, and inferencing engines. Their project, "Semantic Web Technologies for Context-Aware Museum Tour Guide Applications"[13] (Chou et al.), is built around a Semantic Web framework that also utilizes key technologies described in this paper. The application, designed to run on a smartphone, first gathers the visitor's interests and preferences, including privacy rules, and then utilizes OWL (Web Ontology Language) domain ontologies created specifically for this museum's experience.



Figure 9: CMU’s “Context Aware Museum Tour Guide Application”
Source: http://mcom.cs.cmu.edu/mycampus-publications/

Figure 9 shows how real-time web and intranet semantic searches supplement server-based contextual information as the visitor moves through the exhibit space with the location aware features of the smartphone. What the technology does is give the museum experience designers a new medium in which to engage the visitor.

"Location-based services offer a number of interesting possibilities to engage in a deeper level of interactivity with visitors" (Johnson et al. 21). Each visit to the museum can now be a unique experience for each visitor. For example, as a visitor approaches an exhibit about climate change in Greenland, her museum guide application knows where she is, and synchronizes to content playing on an LCD display in the exhibit (similar to the Nielsen Media Sync below). The smart device acts as an interactive companion throughout the museum experience, pulling content such as real-time environmental data from the Internet. Text can be translated in real time. Augmented reality data overlays can be updated with in-field analytical data, as in the Theodolite app shown in Figure 11 below.
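The inference step in such a guide can be surprisingly simple at its core: match sensed context against content metadata. The toy sketch below (not CMU's OWL-based system; all names are hypothetical) shows the idea:

```python
# Rank nearby exhibits by overlap with the visitor's stated interests.
VISITOR = {"location": "gallery-3", "interests": {"climate", "geology"}}

EXHIBITS = [
    {"id": "greenland-ice", "location": "gallery-3", "topics": {"climate"}},
    {"id": "meteorites",    "location": "gallery-3", "topics": {"astronomy"}},
    {"id": "volcanoes",     "location": "gallery-5", "topics": {"geology"}},
]

def recommend(visitor, exhibits):
    nearby = [e for e in exhibits if e["location"] == visitor["location"]]
    return sorted(nearby,
                  key=lambda e: len(e["topics"] & visitor["interests"]),
                  reverse=True)

for exhibit in recommend(VISITOR, EXHIBITS):
    print(exhibit["id"])  # "greenland-ice" first: same room, shared topic
```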

Figure 10: Nielsen’s Media Sync Platform http://paidcontent.org/article/419-abc-and-nielsen-partner-on-ipad-app-that-syncs-tv-and-mobile-viewing/

Figure 10 shows Nielsen's recently tested "Media Sync," a location-aware interactive television application for the iPad, which created a new viewing experience for the since-discontinued ABC drama "My Generation." What's innovative about this iPad application is that it makes use of another capability of smart devices: audio captured by the device's microphone is used to automatically detect what is playing nearby, in this case a specific program on the user's television, and synchronize with it. Audio and video capture, combined with GPS location, accelerometer, proximity sensor, ambient light sensor and Internet connectivity, offer yet-to-be-exploited possibilities for rich, context-aware experiences on mobile smart devices.
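Nielsen's fingerprinting is proprietary, but the underlying idea of audio-based synchronization can be sketched with a plain cross-correlation: find where a microphone snippet best matches the reference soundtrack, and you know the playback offset to sync to. A minimal, illustrative version:

```python
# Locate a captured snippet inside a reference track (illustrative only).
import numpy as np

def find_offset(reference: np.ndarray, snippet: np.ndarray) -> int:
    """Return the sample index where the snippet best matches the reference."""
    corr = np.correlate(reference, snippet, mode="valid")
    return int(np.argmax(corr))

rate = 8000                                  # samples per second
rng = np.random.default_rng(0)
reference = rng.standard_normal(rate * 10)   # stand-in for 10 s of program audio
snippet = reference[3 * rate : 4 * rate]     # one second "heard" by the mic

offset = find_offset(reference, snippet)
print(f"snippet starts {offset / rate:.2f} s into the program")  # 3.00 s
```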

Figure 11: “Theodolite” AR application for iPhone. http://mashable.com/2009/12/05/augmented-reality-iphone/

Augmented Reality

Augmented Reality (AR), "the concept of blending (augmenting) data—information, rich media, and even live action—with what we see in the real world" (Johnson et al. 16), has been around for years, used in science visualization systems, military "heads-up" displays, and other specialized applications. Mobile smart devices' capabilities now offer a new platform for AR applications, with many developers, especially in the U.S., hurriedly competing to create all sorts of novel AR apps, primarily for commercial use. Figure 11 above shows a practical application of augmented reality: a theodolite (a device used to measure horizontal and vertical angles) implemented on a mobile phone.

A European project funded under the Sixth Framework Programme for Information Society Technologies has created some of the more interesting AR apps to date. Figures 12 and 13 show its "superimposed environments" and "reality filtering" mobile applications, in which historical photos and images of classical artwork are superimposed over the real environment.

Figure 12: iTacitus Project, Winchester Cathedral, http://www.itacitus.org/news/2


Figure 13: iTacitus Project, Palazzo Diana, Turin, Italy. http://www.itacitus.org/news/2

As more practical and task-specific applications are developed, consumers and professionals alike will be able to apply AR practicably on their smartphones and tablets, and within the media production process. The surface has barely been scratched on how AR can be deployed on smart devices when combined with the location-aware capabilities described above.

Emerging Trend #5: Social Networking/Social Media and Related Technologies

Predicting that social networking and social media will have an increasing impact on media professionals is about as obvious as predicting growth in the alternative, clean-energy sector. It’s all around us. What is more challenging is figuring out what “killer apps” will be spawned from and for these new paradigms. One growing area to watch is the opportunity for “learning networks” to proliferate within online communities.

Media professionals have historically been adept at conventional social networking, especially in geographic "spikes"[14] like Los Angeles and New York. With much work now performed by far-flung teams who may never meet in person, a new sort of social workplace mindset develops, one that Los Angeles-based producer and former Disney executive Larry Gertz calls "intimate autonomy." The challenge now is for media professionals to embrace the new semantic wikis and other emerging toolkits and invent the future of creative and technical collaboration.

Virtual networks provide both physical and technological "creation platforms" (Brown, Hagel and Davison 144) that can form in all sorts of social networking contexts. Formation of, and active participation in, communities of practice (CoPs), communities of interest, and creation spaces is becoming increasingly important. Larger enterprises have been doing this successfully for years, and practical, efficient tools for anyone to create and maintain professional online communities are now offered by Google and others. Semantic wikis and related technologies, as described in Trend #1, will increasingly be used to enhance social networks, especially among professionals.

Emerging Trend #6: “Open Ended Learning” and Knowledge Transfer

How much specialized knowledge have you acquired during the course of your education and career? What if much of this hard-gained knowledge is no longer unique to you, and is now available to anyone, including your co-workers, subordinates, boss, competitors, and customers? How much will that reduce your value as a knowledge worker? In fact, much of our "explicit" knowledge is already available to anyone with a web crawler and good search skills, and this will be compounded by further implementation of the semantic search technologies described above.

As John Seely Brown and his co-authors write in their recent book The Power of Pull, with so much knowledge just a click away from anyone, professionals can no longer rely solely on the "stores" of knowledge they have built over the years. We all must practice a continuous, virtuous cycle of open-ended learning[15], as the sources of economic value move from "stocks" of knowledge to "flows" of new knowledge (Brown, 2010).

The good news, however, is that there is knowledge that does not flow so readily, and we can benefit from it. As Hagel, Brown and Davison point out in The Power of Pull, the most valuable knowledge is in very short supply and extremely hard to access. It is highly distributed and may be embedded in the heads of people who are not well known and who are difficult to identify. As this tacit knowledge becomes more valuable than explicit knowledge, we need to transform our institutions and our work processes with tools that enable us to collaborate and learn more efficiently.

Each technology and process in this paper helps support open-ended learning. Media professionals will need to employ this mindset in all areas of their work in order to maintain a competitive edge. Experts agree that creating more open knowledge-transfer is increasingly critical for all organizations on all levels.

From “Technology-driven” to “Technique-driven”

Most small and medium design and production organizations are contending with an endless cycle of changing technology that often results in continual, painful process disruptions. To succeed, they must move from a technology-driven business model to one that is "technique"-driven by each individual; from the knowledge stores and silos that accompany an I.T.-driven, techno-centric infrastructure, to the knowledge flows and creation spaces that enhance tacit knowledge transfer. These will complement and support the focus on technique over technology. Herein lies the foundation for open-ended learning.

Concept for a Next Generation Content Production Process

If we could apply these trends to a media production process, what would it look like? First, the cloud would give us instant connectivity to whatever we need, when we need it, with transactions handled automatically. Then semantic search and related technologies would combine to provide decision support, analytics, and reasoning... essentially an omniscient intelligent assistant running in the background. And the multi-sensor capabilities of mobile smart devices (iPhone, Android, iPad, etc.) connected via high-speed wireless create a positive feedback loop, keeping all stakeholders, including the end user/client, active participants throughout the process.

If we build a composite workflow from this "wish list," it could look like the following. Figure 14 is the author's concept design for an "Intelligent Media Production" framework[16]. A key enabling component of this framework is an "ambient intelligent assistant" system. The underlying goal of this concept is to leverage semantic and related technologies for what they do best: inferring meaningful conclusions and automating certain knowledge-intensive tasks such as classification, monitoring, prediction and planning. This is a conceptual framework, but it is a composite of disparate technologies that are either in development or implemented in other applications.


Figure 14: Concept for a Semantic based “Intelligent Media Production” framework (Mulally)

The system, as shown in Figure 14 above, consists of semantic search aggregated with powerful decision support, inferencing, reasoning and analytics engines. The "personal assistant" runs in the background while the editors and designers work, providing assistance ranging from pulling relevant assets from cloud-based resource centers to consolidating and organizing disparate content to be encoded.

This system leverages the Semantic Web, with a support analyst providing ongoing asset management and maintenance of a domain ontology, taxonomies, metadata and tagging. Much of the content in this workflow starts out as unstructured visual media, so the support analyst plays a key role in ensuring that content metadata is optimized to support the workflow.
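As a thought experiment, the ambient assistant's background loop might look like the speculative sketch below. Every name in it is hypothetical; it mirrors the concept in Figure 14, not any existing product:

```python
# Watch the editor's project state, infer needed assets, queue suggestions.
import queue

suggestions = queue.Queue()

def infer_needed_assets(project_state: dict) -> list:
    """Stand-in for the semantic inference/decision-support engines."""
    if "interview" in project_state.get("current_scene", ""):
        return ["b-roll: newsroom exterior", "lower-third template"]
    return []

def assistant_tick(project_state: dict) -> None:
    for asset in infer_needed_assets(project_state):
        suggestions.put(asset)  # surfaced unobtrusively in the editor's UI

# One cycle of the loop while an editor cuts an interview scene:
assistant_tick({"current_scene": "interview with curator"})
while not suggestions.empty():
    print("suggested:", suggestions.get())
```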

Conclusion

Let’s revisit our questions from the introduction. We asked:

With the ever-expanding digital infrastructure and the clockspeed of technological change ... “how does one keep up?” Answer: Trend #6, Open Ended Learning, as well as leveraging the various enabling technologies as tools for learning. By practicing a continuous, virtuous cycle of ongoing learning, you will be able to keep abreast of the technology, trends, and techniques necessary to keep your stores of knowledge fresh and on the leading edge.

What technologies should media professionals be focused on? A safe bet is to start with Trend #1, use of the Semantic Web, and, depending on your area of expertise, work your way down the list.

Which are most likely to gain traction and impact how you work? Certainly all will impact your work to some degree. But Trend #1, the Semantic Web, and Trend #3, the Cloud, though less visible, more "background" technologies, are nevertheless becoming ubiquitous across all disciplines.

Are you prepared to deal with the relentless “gales of change” that are upon us? Yes! Now that you are aware of these trends and technologies, you can forge your own trail in the digital frontier and stay on the leading edge.

Notes

1. W3C Semantic Web Frequently Asked Questions. Accessed 11/2/10 at: http://www.w3.org/2001/sw/SW-FAQ#othersw

2. An ontology, as defined by Kimiz Dalkir in Knowledge Management in Theory and Practice, is “an explicit formal specification of how to represent the objects, concepts, and other entities that are assumed to exist in some area of interest and the relationships that hold among them; a formal, explicit specification of a shared conceptualization.” Tim Berners-Lee, James Hendler and Ora Lassila in their article The Semantic Web, (Scientific American, May 2001), define ontologies as “collections of statements written in a language such as RDF that define the relations between concepts and specify logical rules for reasoning about them. Computers will ‘understand’ the meaning of semantic data on a Web page by following links to specified ontologies.”

3. A taxonomy is a hierarchical structure for a body of knowledge. This provides a framework for how things are grouped, and how things relate to each other (Dalkir 342).

4. Metadata is often defined simply as “data about data.” To provide more nuance to this definition, Peter Morville and Louis Rosenfeld in Information Architecture for the World Wide Web describe the role metadata plays: “Metadata tags are used to describe documents, pages, images software, video and audio files, and other common objects for the purposes of improved navigation and retrieval.”

5. Controlled vocabularies are predetermined vocabularies of preferred terms that describe a specific domain (e.g. auto racing or orthopedic surgery) (Morville and Rosenfeld 52).

6. A tag is a non-hierarchical keyword or term assigned to a piece of information (such as an Internet bookmark, digital image, or computer file). This kind of metadata helps describe an item and allows it to be found again by browsing or searching. Accessed 11/11/10 at: http://en.wikipedia.org/wiki/Tag_(metadata)

7. Folksonomies are described in Wikipedia as a "system of classification derived from the practice and method of collaboratively creating and managing tags to annotate and categorize content." They are also referred to as "mob indexing." Folksonomies are used in social tagging and social indexing at sites such as Digg, del.icio.us, and Flickr.

8. A Wiki is defined in Wikipedia as “a website that allows the easy creation and editing of any number of interlinked web pages via a web browser using a simplified markup language or a WYSIWYG text editor.” "Wiki" is a Hawaiian word for "fast." Accessed 11/11/10 at: http://en.wikipedia.org/wiki/Wiki#cite_note-0

9. The Semantic MediaWiki site, where the current version of SMW can be downloaded. http://semantic-mediawiki.org/wiki/Semantic_MediaWiki

10. Information about IBM's TALES, IMARS, and UIMA is at the following IBM Research websites:
TALES (Translingual Automatic Language Exploitation System) at:
http://domino.research.ibm.com/comm/research_projects.nsf/pages/tales.index.html?Open&printable
IMARS (IBM Multimedia Analysis and Retrieval System) at:
http://www.alphaworks.ibm.com/tech/imars
UIMA (Unstructured Information Management Architecture) at:
http://domino.research.ibm.com/comm/research_projects.nsf/pages/tales.index.html

11. Crowdsourcing is the act of outsourcing tasks, traditionally performed by an employee or contractor, to a large group of people or community (a crowd), through an open call. The term has become popular with businesses, authors, and journalists as shorthand for the trend of leveraging the mass collaboration enabled by Web 2.0 technologies to achieve business goals. Wikipedia, accessed at http://en.wikipedia.org/wiki/Crowdsourcing

12. Context aware smart phones and other mobile devices refer to a feature set that enables the device to sense the user’s location, orientation, and other information (sound and imagery of the user’s environment), to thereby infer the “context” of where they are and what they are doing there. iPhones, iPads and Android devices have accelerometers (provides data on how the device is oriented in 3D space, which, combined with its GPS, enables it to also act as a compass) audio capture (to analyze audio and thereby synchronize with external events) and video capture (to overlay information, as in augmented reality applications), (Mulally).

13. In 2005, researchers from the Carnegie Mellon University’s “Context Aware Museum Guide” systems team described in Emerging Trend #4 (Norman M. Sadeh and Fabien L. Gandon) joined with Oh Byung Kwon to propose another Semantic Web/Context Aware framework called “Ambient Intelligence: The MyCampus Experience” http://users.ece.gatech.edu/~dblough/8823/MyCampus.pdf This framework builds upon the “Museum” work, with a better defined architecture that is more relevant to the “Ambient Intelligent Assistant” system described in this paper.

14. Geographic spikes are described by Brown, Hagel, and Davison in The Power of Pull as the urban centers that attract specialized talent, which in turn expand and rapidly attract even more talent. This is especially happening in rapidly developing economies such as China and India, where cities like Bangalore, Shenzhen and Chongqing are attracting more and more talented people.

15. The author’s concept of open-ended learning draws on the description of the “free agent learner” by William Rothwell in The Workplace Learner (p.39), and a conference call with Rothwell 7/28/2009.

16. The Intelligent Media Production framework is the author's concept based on personal experience, currently available technologies, and reference material as cited in this paper.

Works Cited

ACTIVE Annual Report, downloaded 10/31/10 from http://www.active-project.eu/fileadmin/public_documents/ACTIVE_Annual_Report_2009.pdf

Andrews, Robert, “Google’s Schmidt: Autonomous, Fast Search is ‘Our New Definition’” paidContent:UK... covering UK’s Digital Media Economy. Web. 7 Sep. 2010. http://paidcontent.co.uk/article/419-googles-schmidt-autonomous-fast-search-is-our-new-definition/

---. “Google’s Sci-Fi Now: Voice-Control TV, Eternal Memory, an End to Loneliness,” paidContent:UK... covering UK’s Digital Media Economy. Web. 7 Sep. 2010. http://paidcontent.co.uk/article/419-googles-sci-fi-now-voice-control-tv-eternal-memory-an-end-to-loneliness/

Beaudouin, Remi, and Barouxis, Howard, Live/File-based Workflows Convergence for Multi-screen Delivery Strategy, ATEME, Proceedings of the 2010 Fall Society of Motion Picture and Television Engineers (SMPTE) Conference, October 27, 2010.

Brown, John Seely, Hagel, John, and Davison, Lang, The Power of Pull, 2010 by Deloitte Development LLC, published by Basic Books.

Burger, Tobias, and Guntner, Georg, Towards a Semantic Turn in Rich-Media Analysis. Salzburg Research Forschungsgesellschaft. From the proceedings of the Conference on Electronic Publishing, Vienna, Austria, June 2007.

Chou, Shih-Chun, Gandon, Fabien L., Hsieh, Wen-Tai, and Sadeh, Norman M., Semantic Web Technologies for Context-Aware Museum Tour Guide Applications. In Proceedings of the 2005 International Workshop on Web and Mobile Information Systems. Advanced e-Commerce Institute, Institute for Information Industry, Taipei, Taiwan, ROC, and School of Computer Science, Carnegie Mellon University, Pittsburgh, PA, USA.

Dalkir, Kimiz, “Knowledge Management In Theory and Practice.” 2005 Elsevier Inc.

Djordjevic, Divna, Fullarton, Duncan, and Ghani, Rayid, Process-Centric Enterprise Workspace Based on Semantic Wiki, Accenture Technology Labs, France/USA; Karlsruhe Institute of Technology, Karlsruhe, Germany.

EyeForTravel, Social Media Strategies Travel 2008 Special. Published 11 Apr 2008. Accessed 11/14/10 at: http://www.eyefortravel.com/node/14016

Fidjeland, Mikael Kirkeby, Reitan, Bård K., Hansen, Bjørn Jervell, Halvorsen, Jonas, Langsæter, Tor, and Hafnor, Hilde, Semantic Wiki: Collaboration, Semantics & Semi-structured Knowledge, Norwegian Defense Research Establishment (FFI), February 26, 2010.

Fitzgerald, Michael, “Cloud Computing: So You Don’t Have to Stand Still,” New York Times, May 25, 2008.

Gandon, Fabien L., Sadeh, Norman M., and Kwon, Oh Byung, Ambient Intelligence: The MyCampus Experience. July 2005. School of Computer Science, Carnegie Mellon University.

Gartner Research, The What, Why and When of Cloud Computing, Accessed 11/1/2010 at: http://www.gartner.com/technology/research/cloud-computing/index.jsp

Googleblog: Deeper understanding with MetaWeb, 7/16/2010 10:38:00 AM. Accessed at http://googleblog.blogspot.com/2010/07/deeper-understanding-with-metaweb.html

Hagel, John & Seely Brown, John “Cloud computing, Storms on the horizon” Deloitte Center for the Edge, Working paper (pdf). Downloaded Sept 16, 2010 from: http://www.deloitte.com/assets/Dcom-UnitedStates/Local%20Assets/Documents/TMT_us_tmt/us_tmt_ce_CloudsStormsonHorizon_042010.pdf

Hagel, John, Edge Perspectives with John Hagel. Posted to http://www.edgeperspectives.com/ by John Hagel on June 14, 2010.

Berners-Lee, Tim, Hendler, James, and Lassila, Ora, “The Semantic Web,” Scientific American, May 17, 2001.

IBM, Dispelling the Vapor Around Cloud Computing, IBM Thought Leadership Whitepaper, January 2010. Downloaded October 6, 2010 from: http://www.informationweek.com/whitepaper/

Johnson, L., Witchey, H., Smith, R., Levine, A., and Haywood, K., (2010). The 2010 Horizon Report: Museum Edition. Austin, Texas: The New Media Consortium.

Jung, Byunghee, Song, Junehwa, and Lee, Yoonjoon, A Narrative-Based Abstraction Framework for Story-Oriented Video. Korea Advanced Institute of Science and Technology and Korean Broadcasting System. Published in the ACM Transactions on Multimedia Computing, Communications and Applications, Vol. 3, No. 2, Article 11, Publication date: May 2007. Download at: http://nclab.kaist.ac.kr/papers/Journal/narrativebasedabstrationframework.pdf

Kamel Boulos, Maged N., Semantic Wikis: A Comprehensible Introduction with Examples from the Health Sciences, Journal of Emerging Technologies in Web Intelligence, Vol. 1, No. 1, August 2009.

KiWi, “Knowledge in a Wiki,” accessed 11/1/10 at http://kiwi-community.eu/display/about/About

Landry, Susan, Key Trends Affecting Enterprise Applications. Gartner Webinar, presented 9/9/10. Accessed 9/29/10 from http://my.gartner.com/portal/server.pt?open=512&objID=202&mode=2&PageID=5553&resId=1417920&ref=Webinar-Calendar

LiveOps, Accessed 9/16/10 from: http://www.liveops.com/why-liveops/why-cloud-computing.html

Mell, Peter & Grance, Tim, The NIST Definition of Cloud Computing, Version 15, 10-7-09. Web. 21 Sep. 2010 http://csrc.nist.gov/groups/SNS/cloud-computing/

Metz, Cade, “Web 3.0”, PC Magazine, 2007. Web. 21 Sep. 2010. http://www.pcmag.com/article2/0,2817,2102860,00.asp

Morville, Peter, and Rosenfeld, Louis, “Information Architecture for the World Wide Web” Third Edition, 2007. O’Reilly Media, Inc.

Nielsen’s Media Sync Platform, accessed at: http://paidcontent.org/article/419-abc-and-nielsen-partner-on-ipad-app-that-syncs-tv-and-mobile-viewing/

Pew Internet and American Life Project, accessed 11/2/10 at http://pewinternet.org/topics/Cloud-Computing.aspx

Pollock, Jeffrey, Semantic Web for Dummies, Wiley Publishing, Inc., Indianapolis, Indiana, 2009.

rPath, Cloud Computing in Plain English, rPath Inc. Web. 6 Oct. 2010. http://www.rpath.com/corp/cloudinenglish

Siegel, David, “Pull. The Power of the Semantic Web to Transform Your Business,” 2009. Penguin Books, London.

Tuttle, Beecher, “Robotics -Google Testing Self-Driving Cars in California.” Robotics.TMCnet, October 10, 2010. Web. 10 Oct. 2010.

W3C (World Wide Web Consortium), Web. 18 Sep. 2010. http://www.w3.org/standards/semanticweb/inference

Williams, Jenny, Ideagarden Consulting, Web. 21 Sep. 2010. http://www.labnol.org/internet/web-3-concepts-explained/8908/