Monday, March 9, 2009
Monday, March 3, 2008
Which brings me to Wayne MacPhail, social media guru and owner of a cool-looking laptop cover, who gave a talk on Web 2.0 and social networking. Wayne talked to us and then went to the back of the room and complained about us on Twitter. We love you too, Wayne. By the way, we're not New Media students, we're Interactive Multimedia students; New Media is down the hall. Just because we can create AIR applications and Flash-based gesture technology doesn't mean we tweet about it all day.
Back to the point, what is this Web 2.0 everyone is talking about? Glad you asked.
Web 2.0 is pretty much a buzzword for what the internet has evolved into: a marketing phrase for the group of websites that combine information to create a richer, more well-rounded internet.
Take search engines, for example: years ago they relied on a primitive database of keywords. There was an honour system that said if your website had certain keywords stored in its meta tags, it was indexed as such.
The problem with that: there is no honour on the internet. Which is why porn sites had every keyword under the sun listed in their meta tags. And we all remember the kid in high school who searched for "Abe Lincoln" or "Acid Rain", clicked into a porn site by accident, and promptly got suspended. Not to mention that after a hundred thousand websites on gardening, the keyword "gardening" doesn't mean quite so much anymore.
But like most good things in life, search engines matured with age. They are almost at the level of artificial intelligence at this point. Little robots crawl through the page you are viewing, reading words like "baby" and "pregnancy". In the past, search engines would produce ads based on those keywords alone: "Baby Names", "Baby Super Store" and "Baby Books". With the advent of Google AdSense, the robots have become smarter. Now when they read through your page and find keywords like baby and pregnancy, they also look for words that carry a negative connotation, like abortion, so that the ads shown to a happy new mother expecting a child are different from those shown to a woman who just lost a baby.
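The negative-keyword idea can be sketched in a few lines. This is just a toy illustration of the logic described above; the word lists and ads are made-up examples, not Google's actual AdSense rules:

```python
# Hypothetical ad inventory and negative-connotation word list,
# purely for illustration.
BABY_ADS = ["Baby Names", "Baby Super Store", "Baby Books"]
NEGATIVE_WORDS = {"abortion", "miscarriage", "loss"}

def naive_ads(page_text):
    """Old-style matching: any page mentioning 'baby' gets baby ads."""
    words = set(page_text.lower().split())
    return BABY_ADS if "baby" in words else []

def context_aware_ads(page_text):
    """Smarter matching: suppress the ads if negative words also appear."""
    words = set(page_text.lower().split())
    if "baby" in words and not (words & NEGATIVE_WORDS):
        return BABY_ADS
    return []

print(naive_ads("coping after the loss of a baby"))          # serves baby ads anyway
print(context_aware_ads("coping after the loss of a baby"))  # serves nothing
```

The same page gets cheerful baby ads from the naive matcher and nothing at all from the context-aware one, which is exactly the difference Wayne was describing.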
So it's the dawn of a new age: search engines don't use meta tags and keywords to determine website ranking anymore. Instead they use relevance and popularity. And how do you increase your relevance and popularity? By becoming part of a social network!
By adding relevant content to social networking sites (SNS for short), both traditional and not so traditional, there is an opportunity to increase both popularity and relevance. Take Blogger.com, for instance. It was purchased by Google a while back and is now a powerful tool for gaining higher rankings in search engines, specifically... you guessed it, Google. Since Blogger.com is indexed every 20 minutes, blogs that are updated regularly with relevant content shoot up in Google rankings super fast, especially if your blog labels, tags and titles are a little out of the ordinary. If you're the hundred-thousandth blog post titled "Why I love gardening...", you're going to be buried in the pack.
Social networking is also a big part of Web 2.0. SNS encourage community, collaboration and shared content creation. But as Wayne said, "You don't use a social network, you become a part of it."
And as you become a part of it, you develop relationships with other users. Relationships tend to form more strongly under shared stress, like the bonds soldiers develop in times of war, addicts going through withdrawal, and cancer patients surviving another day. The same is true of social relationships developed on the internet. Social networks also encourage engagement and evangelism.
For example, one of the administrators and database coders on TorrentSpy.com, who had worked laboriously on the site for years and was well respected and liked among his peers and the users of the site, found his dream car. The problem? He didn't have quite enough cash for it, and if he didn't move fast he would lose it. Now, his dream car wasn't flashy and new, nor a gas-greedy monster Hummer, but a classic automobile he had dreamed of since he was a child. The community responded in a big way: he needed $15,000 to purchase the car; what he received was over $30,000. In the end he used $15,000 to buy the car, $5,000 or so to buy new rims, and the rest to upgrade the website servers for faster service for TorrentSpy's millions of users. He asked the community for help and gave back to it at the same time: social evangelism at its finest.
So what's the downside to all this social networking? Well, it can be tedious and complicated, for one thing. Going on Flickr to post your pictures, then off to Twitter to tell people you're posting pictures on Flickr, then Jaiku to have a conversation about your Twitter post on your new Flickr images, all while you're shooting a video of yourself talking to someone about your Twitter post about your new photos on Flickr. Getting a little complex, isn't it?
There's also the big issue of privacy. Are you ready to share everything with the World Wide Web? I know I'm not. I don't even like the idea of posting my thoughts for everyone to read, especially when they're archived like everything is on the internet. What if you have a bad day and rant and rave about your boss? Maybe your boss won't find it, but what if he does? What if it costs you more than just your professional reputation, but your job as well? And if I have to censor most of what I'm posting on social networking sites, what is really the point? I can keep my mouth shut offline.
So in the end Social Networking is a cool idea and I love the fact that Geeks have found a way to have a social life without ever leaving the comfort of their computers. But maybe a more important kind of networking is connecting with people in the real world. Despite what we may think, human beings need physical contact with others. Social Networking is just another excuse for us to say, "I don't feel like going out tonight."
Links for Marks:
Googlezon: EPIC 2014 - What would happen if social networking went too far?
Twhirl - a desktop client for twitter, based on the Adobe AIR platform
Capazoo - a social networking site that Wayne didn't mention
JSYInnovations - Marketing your website in the Web 2.0 world
Suprglu - How do you connect your social networks? Super glue...lots and lots of it.
Thursday, February 14, 2008
So on to the GO Train, where Dan, the rest of the students and I made a woman feel so awkward with our GO Train movie trivia that she quickly moved up to the next level. But that's okay, because soon we were in Toronto and on our way to the subway.
One tiny token later and we were on the subway, headed to the Royal Ontario Museum to meet the rest of our class for the field trip of a lifetime.
Apparently we were a little late, as half of our class was already there, but our speaker Brian Porter didn't seem to mind too much.
Brian took us into the ROM Digital Gallery, which consists of a giant screen and several different touch screens. As we waited for the show to start, so to speak, Brook and I played with the touch screen in front of us. Through the touch screen interface there was a choice between Ancient Egypt and Canadian Heritage. Inside the Egypt section there are several different artefacts to look at, which can be rotated and zoomed in on with a few touches of the screen. One artefact from that section that I found particularly interesting was a canopic jar of Isis.
In Egyptian mythology Isis is the goddess of fertility and motherhood. She is the daughter of Earth and Sky, or Keb and Nut in Egyptian. She is also the sister and wife of Osiris, Egyptian god of the underworld. There is even a neat story of her searching all over Egypt for the body parts of her dismembered husband, which she gathered and restored to life.
On the Canadian Heritage side there was a very cool looking pistol and a pair of children's shackles. According to the ROM the shackles are from Georgia, which made me wonder how they were part of Canadian Heritage...but we didn't really get to that part in the video.
But first, Brian Porter introduced himself as the Senior Director of New Media. He spoke a little bit about the improvements he has made at the ROM since he started working there, such as digital photography, improvements to the website, and 3D imaging of objects (as seen in the ROM Digital Gallery). Brian also mentioned that his job is to find new media funding for the ROM, Canada's largest museum.
He explained that the ROM is a charity and therefore gets its funding from a variety of sources: government grants, donations, fundraisers, etc. Brian also told us that museums aren't pioneers: because they are charities, they only use proven, reasonably priced technology that has been around for a while. Later on Brian also mentioned how having a broad range of skills is essential in the job market.
Working for a charity myself, Performing Arts Burlington, I completely understand where he was coming from. And it made me wonder if that was the nature of charities as a whole. That everyone needs to be a jack of all trades, since there is never enough funding to go around. I mean, at PAB I'm an event planner, web designer, graphic designer, administrative assistant, membership invoicer, tech support....the list goes on.
After his speech, Brian started a film that the ROM staff had created based on Canadian Heritage and kids interacting with the touch screens. The film wasn't phenomenal, but it was decent quality given the limited funds. And as Brian put it, "If you were in 5th grade..." I finished his thought with, "...this would blow your mind!"
What blew Brook's mind and mine was seeing Carlos Bustamante from YTV as the host/narrator of the movie.
Brian also showed us a touch screen available for the general museum public which scrolled through a list of people and companies that donated to the ROM and gave the history of the museum itself. It allowed users to easily find their name, since patrons want to see their name listed for sizable donations. The wall itself was very elegant and neat looking. Brian told us that he often finds people standing in front of the wall, interacting with it.
Perhaps I don't have enough faith in the average ROM visitor, but I can't see a lot of people touching the wall and interacting with it. Nowhere does it say, "I'm an interactive wall, touch me!" I think that most visitors would walk by it, or perhaps stop and watch it scroll through; few would actually touch the screen and direct its output. I mean, interactivity is great, but if no one knows how to use it, what good is it?
On to the Dinosaurs!
No trip to the ROM is complete without seeing the Dinosaur exhibit. Brian pointed out the touch screens used in that exhibit but I'm sure, like me, most of the class was more interested in the dinosaur exhibits themselves.
One really cool dinosaur was the Corythosaurus. Cory, of course, got a few pictures, and Dan got a picture of him and his dinosaur for posterity.
Onwards and upwards to the sound studio to meet Zak, a Sheridan alumnus. Zak explained some of the things they do up in the sound studio: podcasting, soundtracks for the videos they create like the one on Canadian Heritage, etc. Being at the back of the group I couldn't quite hear everything Zak was talking about or see what he was doing. But he did tell everyone loud and clear that he loves his job and it is the perfect job for him.
And here's where things started going sideways... Dan said that his friend Steve Mann's workshop was right across the street from the ROM. He was only off by a mile. A mile we ended up having to walk in the slush and snow... the entire class was soaked from the knees down. Except for Barbara and Dwight, who were smart enough to drive.
Steve Mann invited us in, his famous headgear taking us all in. He was quick to explain that now that the world has moved into the cyborg age, he is no longer interested in being a cyborg himself, instead focusing on more primordial things, like water.
Steve then showed us his new invention, the hydraulophone, an instrument that creates sound from water pumped through it. It's kinda cool, and Steve even let us play with one he had set up in his workshop. Adrian even played Chopsticks, which was pretty cool.
Our host explained that he had patented several different types of hydraulophones and that he was in talks to get them into several government buildings, city halls, etc. Hydraulophones, including the Nessie model pictured above, start at about $10,000. Which is quite reasonable if you have a government budget, but as someone pointed out, Steve is missing out on smaller markets like school playgrounds and daycares. But at $10,000 a pop, he might not care.
After 30 minutes Steve had to go to a meeting so the rest of us filed out of his workshop. Next stop, Silver Snail: Comic and Toy shop! Driven by the awesome Dwight...but that's another story....
So when the day was over, I got driven back to my car by Dan, which was nice, and it would have worked out much better had Sheridan not been in lockdown because of a scary looking tripod.
But the day did eventually end, with some warm socks. All in all it wasn't a horrible field trip, although I could have done without the miserable mile-long walk. But the real problem was that the day didn't really have any multimedia pioneering in it at all. ROM director Brian Porter told us himself that museums aren't pioneers, that older technology is the best they can afford. Inventor Steve Mann told us that he was through with the cyborg age and was instead going back to his primordial roots. What Steve created could be called pioneering, but it certainly isn't of the multimedia variety.
One other point of interest:
And Brian Porter, while not working hard as a director of new media at the Royal Ontario Museum is actually a Secret Agent Man, according to his phone which plays the 007 theme when it rings. :)
Links for Marks:
Royal Ontario Museum Site: Podcasts made by the Sound Studio
Touch Screen Technology: How it Works
Virtual Museum of Canada: Can't go to a museum? This is the next best thing.
Steve Mann's Fluid Site
YouTube Video on Hydraulophones: First prototype to installation at the Ontario Science Centre
Wednesday, January 23, 2008
Anyway, back to the presentation.
James taught me some really basic things about cell phones that I didn't know, or didn't think about enough to realize. For instance, SMS stands for Short Message Service. I never knew what it stood for, even though I text Jeff fairly often. Also, there are three types of messages: person to person, computer to person and person to computer. Which made me think of my e-commerce days, where we talked about business to business, business to consumer and the very rare consumer to business.
Our speaker also talked about 2D barcoding, like QR Code or Data Matrix. A 2D barcode holds way more information than a traditional 1D barcode, like the ones cashiers scan when you purchase products. Once the information is encoded in the barcode, a barcode reader decodes it. Apparently this is crazy big in Japan, but I can see it catching on here too. James even showed us a picture of one of these barcodes at Niagara Falls, someplace I've actually been to.
How does that relate to mobile/cell phone technology? Well, you can have a barcode reader on your cell phone, so when you're at Niagara Falls you can scan (capture an image of) the barcode, decode it and find out all the information available. Johnson and Johnson is putting these barcodes on their products as well, so their employees at shipping docks and warehouses can scan the barcode and get more information electronically instead of on printouts all the time. Saving the environment one barcode at a time. :)
Anyway, the Falls barcode is part of the Semapedia project, which places barcodes around the world that decode into Wikipedia pages about whatever attraction the barcode is at. Now, they don't have a huge ton of barcodes out there, but they do have a map of the world showing where they are at the moment, which is kinda cool.
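The clever part of Semapedia isn't really the barcode itself but the mapping from a tiny decoded payload to a full Wikipedia article. Here's a minimal sketch of that idea; the `wp:` payload format is my own invention for illustration, not the real Semapedia encoding:

```python
from urllib.parse import quote

def payload_to_wikipedia_url(payload):
    """Turn a decoded tag payload like 'wp:Niagara Falls' into an article URL."""
    if not payload.startswith("wp:"):
        raise ValueError("not a Wikipedia tag payload")
    # Wikipedia article titles use underscores instead of spaces
    title = payload[len("wp:"):].strip().replace(" ", "_")
    return "https://en.wikipedia.org/wiki/" + quote(title)

print(payload_to_wikipedia_url("wp:Niagara Falls"))
# https://en.wikipedia.org/wiki/Niagara_Falls
```

The phone's barcode reader does the optical decoding; everything after that is just a short string lookup like this one.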
Quite frankly though, I would worry that someone would think it's vandalism or garbage and the stickers would be ripped off buildings, signposts, etc. I know where I live there are graffiti artists who put stickers of their art on different things, and they get ripped off. How are the uninformed, non-technical masses to know what the barcode is and what it really means?
Well, the first step is to get people decoders. Here are some links to decoders for your cell phone: Kaywa Reader (most phones, including a Java reader) or QuickMark. Plus you can also generate your own QR and Data Matrix codes on those sites.
Sidebar: Capital One allows you to use your own images to create a unique credit card. Wouldn't it be cool if you used a 2D barcode as your image? Then you could create a wiki on yourself or whatever. How funny would it be if your credit card got stolen, and the thief scanned the barcode, and you could give them a message like, "You thief! You're screwing up my credit score!"
James also mentioned that you can make applications for cell phones with HTML. But if you want to get fancy, you need:
Flash CS3 Professional and Adobe Device Central
A testing device, i.e. a phone (Nokia is recommended)
A connection to the phone: memory stick/Bluetooth/cable
But then he said that ActionScript 3 isn't supported, only ActionScript 2, that Flash Lite 3 isn't supported either, and that Flash 8 would be best. Which, I think, means that the technology we're going to use to create cell phone applications won't run on cell phones. AKA, we're too technologically advanced for the new technology we're learning. Sweet.
Another cool thing that James showed us was Adobe Device Central, which lets you select a phone to display your program in; you then create the program, run it, and click the buttons to see how they interact with it. Plus you can switch between ten or more different phones to see how the application looks on each of them, without buying multiple phones to test things out on. Which is very cool. James suggested going to mobile.processing.org, which is about Java-based application development through JAR files instead of SWFs.
Mr. Eberhardt also talked about the future of cell phones: multi-functional phones that can call people, shoot video, take photos, text message, play radio, play MP3s, etc. He posed the question to the class: is this a good thing?
Yes and no. Yes, it's a good thing because you can do much more with your phone than ever before, and you can reach people and the information you need from nearly wherever you are. No, because if you can do all your work, or a large majority of it, from your cell phone, where does work end and personal time begin? Another downside is the all-in-one printer syndrome. All-in-one or 3-in-1 printers came onto the scene saying they could copy, fax and scan (staple, fold and shred too, on some models), and wouldn't it be wonderful to have a machine that took care of all your office needs? It would be. Except that those types of machines rarely work. They can do their main function of copying, but everything else is flaky at best. We already have the same problems on cell phones. Sure they can play music, but what kind of quality do you get versus a stereo or MP3 player or James' turntable? Texting is already super slow at 10 words a minute. Images and videos taken by phones are usually horrible resolution and quality. So what happens when we add more and more applications? Good question.
To sum up: mobile technology is immature, Flash doesn't have widespread support, Java has a lot of bugs and Canadian download fees are insane. But in the future... it's going to be awesome. And it's just one more reason I should be buying an iPod Touch or an iPhone.
Oh and one last thing before we go onto the links section. James Eberhardt's home wireless network is called the batcave. I love that.
Links for Marks:
QR Code or Data Matrix.
Semapedia with their map of the world
Mobile Device Blog - came up in my research which is kinda cool
Kaywa Reader or QuickMark to decode barcodes and create them
Adobe Device Central
Friday, December 7, 2007
For our latest blog posting Dan brought in Simon Conlin. If you don’t know, and I sure didn’t, Simon Conlin is from Flash in the Can. And what is Flash in the Can, you may ask? Well, it’s a “do-it-yourself festival for new media designers and developers who will learn the latest in interactive techniques and technologies and how to apply them to their own work.” Sounds cool, doesn’t it?
Anyway, Simon showed us YouTube videos to demonstrate the different ways Physical Interactivity, like Gesturetek’s technology can be used. I’ve said before that I didn’t think that gesture technology was all that. But after seeing some of the really cool things from the videos I’m beginning to change my mind.
Simon showed us work from a few different people, but it mainly centered on Zack Booth Simpson. This video here, I thought, was very cool. Zack talks about creating elements based on the gestures of people, like shining a light on a screen to make flowers grow or swirling a light to create different motions of colour. I thought that was very interesting because it never occurred to me that shining a light on a screen would or could activate gesture technology.
Zack also talks about being inspired by an anime to create a pool of water that grew life when stepped on, but once you leave, the life dies off. In this way people would be motivated to walk quickly, jump around and see how much life they could create. But Zack wanted to show the relationship between giving life and taking it away, and the fact that you can’t have everything, so he designed it so that if you jump/walk/activate it too much, all the life dies away.
Watching that little scene I had a burst of inspiration. I thought it would be cool to do a similar thing, except have the life in a pond or a forest or something grow independently, without any outside influence. But when someone does activate it, instead of creating life like Zack’s, I would want mine to destroy life. And the more you destroyed it, the longer and longer it would take to grow back once you had left. Maybe even design it so that if you jumped on it or activated it too much, nothing would ever grow unless the program was reset. I think that’s the basis of life on Earth. How marvelous would Earth be if humans never existed on it? If we were never around to wreak havoc on it. We’re the only creature on the planet that destroys its own habitat to live. I think an installation like that would be really cool in a museum somewhere, in an exhibit about life on this planet.
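My installation idea is really just a little simulation loop: life regrows on its own, footsteps destroy it, accumulated damage slows the regrowth, and past a threshold nothing comes back without a reset. Here's a toy sketch of that logic; all the numbers are arbitrary tuning values I made up, not anything from Zack's actual piece:

```python
class LifePool:
    """Toy model of the grow/destroy installation idea."""

    def __init__(self, max_life=100, kill_threshold=300):
        self.life = max_life            # current amount of life on screen
        self.max_life = max_life
        self.total_damage = 0           # lifetime damage from all visitors
        self.kill_threshold = kill_threshold

    def step_on(self, damage=20):
        """A visitor activates the pool, destroying some life."""
        self.life = max(0, self.life - damage)
        self.total_damage += damage

    def tick(self):
        """One time step with no visitors: life regrows, slower the more it was hurt."""
        if self.total_damage >= self.kill_threshold:
            return  # permanently dead until reset
        regrow = max(1, 10 - self.total_damage // 50)
        self.life = min(self.max_life, self.life + regrow)

    def reset(self):
        self.life = self.max_life
        self.total_damage = 0

pool = LifePool()
pool.step_on()
pool.step_on()   # two footsteps: life drops from 100 to 60
pool.tick()      # regrows a little while no one is standing on it
print(pool.life)
```

A real installation would drive this from camera input instead of method calls, but the "trample it too much and it never comes back" behaviour falls out of the `kill_threshold` check.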
Adam Chapman is also featured on that clip but his pieces are not as interesting to me, especially the one where he created an algorithm so that birds flying in the sky would come together in a pattern to create a symbol or word.
Simon also showed us the website of a company called Meso. Meso does incredible things with huge interactive screens and floors that are manipulated by people. For example, we saw a video from a George Michael concert, where the band was walking along this giant environment and activating it as they performed. It’s just incredible. There are some images on that page where it looks like George Michael is standing on an overflowing river of colour. Even I would love to see a concert like that, although I worry about how much the ticket price would be; giant interactivity can’t be cheap.
Meso also did a cool project with an Energy Globe, which is like a replica of the Earth that can be programmed to show the effects of different weather patterns and how global warming is affecting the planet on a global scale. Here’s the link for that. It’s definitely something I’d love to see up close.
In addition, the class was shown an interactive screen covered with musical instruments that allows users to play them. Which is basically what my AV/MP assignment with Jeff is, but theirs looks a lot nicer. Thankfully ours is only a prototype.
I think possibly the coolest thing we were shown in Simon’s presentation was the Adobe interactivity wall. Adobe, the creators of Creative Suite 3, made a giant wall with a slider and an infrared camera, so that when someone walked by, it would activate and create a scene based on their movements. It was amazing to see, and it would be really awesome to be able to create something like that in class. Especially if it was a giant group assignment; that way everyone could create a vector image of a robot monkey growing wings or a mushroom shooting spikes, and it would all come together on a giant wall. And in terms of marketing, it was a brilliant idea for Adobe to sell their new software this way, because people weren’t watching a static ad on TV, but interacting with the ad itself, not being constantly pushed to buy but motivated by their own creativity, amusement and enjoyment.
And finally we have the octopus video, which is a demonstration of amazing camouflage, although I’m not really sure why we were shown it. It would make a cool interactive ocean if, when you stepped on a plant, an angry cuttlefish or octopus swam out and then moved to the other side of the water to hide in a different underwater plant. It would be cool to watch people trying to hunt down the poor octopus.
If nothing else, the presentation did give me a lot of ideas.
Here is a list of the links in this blog entry:
Flash in the Can
Wednesday, October 24, 2007
Of course we almost missed that experience altogether because the building housing the Gesturetek offices was under construction. We couldn't see it from the car at all, not even the numbers on the building. After 15 minutes of driving around in circles (which were actually squares, thanks to one-way streets, the bane of my driving existence), we finally saw a parking spot.
But of course it was already taken! By Killian and her own band of merry travellers. Darn! But at least Jeff and I were no longer alone. We found a spot in the lot across the street and more importantly I didn't have to give the creepily happy parking guy my car keys.
So we started walking... the wrong way. Until a helpful stranger showed us the way. I'd like to point out Jeff's artist's rendering of the helpful stranger, since it is a timeless piece of beauty.
Eventually arriving at Gesturetek we were escorted to the back room and the rest of the class.
Vincent John Vincent, President and Founder of Gesturetek and holder of the coolest name in the room, showed us around the lab.
One of the things I found interesting was the different types of cameras used for gesture technology. For instance, they have single-camera capture systems that allow one point of contact within the matrix. Multi-touch systems, which use two cameras, allow users to have a point of contact within the matrix for each hand. Multi-touch is also a system created by two US researchers, designed to reduce the repetitive strain injuries associated with prolonged keyboard and mouse use. Check the links at the end for more on that piece of tech. Anyway, the multi-touch system allows much more creativity in programming and gives the user a lot more control.
Speaking of control, something else I found interesting was the fact that gesture technology can control any size screen from any distance, meaning that they really can bring big ideas into reality.
Photo time! Here's a lovely photo of delicious irony taken by Jeff Winkworth, showing the height of technological achievement... and what are we doing with it? Playing Pong. It just doesn't get any better than that.
Vincent John Vincent, or VJV for short, also told the class about the different uses for Gesturetek's technology.
GesturetekHealth is a division of Gesturetek that uses their technology to help with rehab services. Patients who have to undergo physical therapy use gesture technology to make the therapy more fun. According to VJV, this technology is being used today in 30 different health/rehab companies and improves the length of time patients spend on their therapy by 33%. Which is quite significant if you think about it.
Weather Services International is another company that uses Gesture Technology. You may not realize it but some Meteorologists use it in their weather forecasts. When they stand in front of a green screen and move their hands they can cue different weather patterns and effects, such as a cold front moving over the region.
Gesturetek has also done a lot of work in Japan. They created a Japanese TV show, called Nick Arcade in which characters ran around on a green screen which put them into an arcade type game which was broadcast on television.
They have also just launched a series of games for Japanese cell phones. Apparently there are approximately 100 games in development that require the user to either gesture in front of their cell phone to move pieces in the game, or tilt and twist the phone to move elements on the screen. This is yet another division of Gesturetek, called GesturetekMobile.
One of the games VJV mentioned by name was Katamari Damacy, which is, at best, an unusual game. Basically, your father, the King of All Cosmos, has destroyed the moon and the stars. That's what you get when you're bored and have unlimited power. Anyway, you are his son, tasked with recreating the moon and the stars. How? Well, you push around this super-adhesive ball called a katamari, rolling it around cities and countrysides, picking up everything from nails to school children to double-decker buses. Once the ball is large enough you can take it up into space and create a star. And that's it... so yah, the game isn't for everyone.
I did find the field trip interesting, but my opinion of gesture technology hasn't really changed. I know that Gesturetek has 1600 offices worldwide and is probably raking in the dough. But I don't see the technology as anything more than a passing fad.
It’s cool and “gimmicky”, but the practical applications don't really extend to the everyday user. Meteorologists and rehab patients aside, who is really using this technology? I know the deal with the Japanese cell phone company is probably netting Gesturetek millions, but I still don't see the value of it. I've played one type of game they mentioned, where you rotate, twist and turn to make an object on the screen move. The game I played, on Game Boy Advance, was called Yoshi Topsy-Turvy. And after two levels I found it thoroughly irritating. I find that type of game control very limiting and repetitive.
Look at the Wii. It’s a good family console, lots of fun for parties and groups etc. But what about the everyday use? I like my Wii, but I haven’t played a ton of games on it. I have played games that are specially designed to work with the gesture technology on it. Like Wii sports and Trauma Center: Second Opinion. And that’s only every now and then when I feel like something different. But the everyday games like Spider-man, Paper Mario, Darkness etc, I play on other consoles or ignore altogether. How frustrating, tiring and boring would it be to have to web-sling across New York by swinging your arm back and forth? Playing that same game on X-box 360 I traveled over 200 miles web-slinging. If I had done that on the Wii my arm would have fallen off.
In my mind, gesture technology will never become mainstream. But godspeed to the good folks at Gesturetek for turning it into a lucrative business. And here are some links:
Weather Services International
Gesture Recognition Technology
Expanding Gesture Technology Beyond the Eyetoy
Thursday, October 4, 2007
Immersion Studios is a brisk four-floor walk up from the Interactive Multimedia lab. Inside is a busy hive of computer programmers and designers, everyone a geek at heart, as demonstrated by their desk clutter, i.e. toys like a bobblehead Wolverine, Transformers and other knick-knacks.
But what's the real story behind Immersion Studios?
Well, for starters, it's not technology specifically created for gaming, as one might guess. Instead it is used for simulations and programs designed to teach. For instance, the space movie we were treated to today had users frantically clicking to save an astronaut from being killed by parasites. Which sounds like a game, but with the realism of the graphics of the inner body, cell and skeletal structures used, simulations like this can easily be used to teach medical students.
Another project they have created is called Crime Scene Protection. Also a learning tool, this one teaches police officers to find and protect physical evidence at a crime scene. Other projects teach officers how to measure skid marks to calculate car speeds, how to gather data at accidents, and how to safely pull a vehicle off the road.
Immersion also has visualization based learning, like their Viking Mystery and the Ottawa Light Rail project. These types of elements are less interactive in their teaching but still viable as teaching aids.
Another cool project they are working on at the moment is F.A.C.E., the Facial Animation Communication Engine. This program allows a computer to mimic the facial expressions of a person on a 3D-rendered person on screen. Or it will when it's completely finished, anyway. The technology works on 59 different points of facial recognition, or control points, to determine how to properly mimic a person's facial expressions. The program first detects the head, then moves on to detect each region of the face: eyes, lips, nose, etc. I'm sure once it's fully functional the technology will be amazing; I'm just unsure what its practical applications would be.
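The staged approach they described, finding the head first and then each facial region with its control points, can be sketched schematically. The region names and per-region point counts below are my own illustrative guesses; only the total of 59 control points comes from the presentation:

```python
# Hypothetical split of the 59 control points across facial regions.
FACE_REGIONS = {
    "eyes": 16,
    "brows": 10,
    "nose": 9,
    "lips": 18,
    "jaw": 6,
}

def total_control_points(regions):
    """Sum the control points tracked across all regions."""
    return sum(regions.values())

def pipeline_stages(regions):
    """The order of operations described: locate the head, then each region."""
    return ["detect_head"] + ["locate_" + name for name in regions]

print(total_control_points(FACE_REGIONS))
print(pipeline_stages(FACE_REGIONS))
```

The point of the staging is that each step narrows the search space for the next: once the head is found, each region detector only has to look inside it.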
I was especially interested in the programs they used to create all the wonderful visualizations that they showed us. And even more amazed that the program they used for a bunch of their work is one I can get for free.
Ogre, according to their site, is one of the world's most popular open source graphics rendering engines. This is definitely something I'm going to check out. Free technology is good technology.
I also noticed another rendering program that was advertised on a wall at Immersion Studios which is called Houdini. It's created by a company called Side Effects Software. Although it's not free or even cheap, it still seems to create some amazing effects.
Finally, on Immersion Studios' website they mentioned that they have worked on games for Game Boy Advance. Based on the medical graphics from the space video, it got me thinking of a game called Trauma Center: Second Opinion. This game is on the Wii, and players get to be a surgeon in a busy hospital. Throughout the game you perform operations, such as removing broken glass from an accident victim's heart, excising a tumor and even treating an infectious disease. Although the graphics are a little cartoony in nature, they are very accurate and lifelike. The game even gives you real tools of the trade to use, such as syringes, scalpels, a laser, forceps and an ultrasound. The game itself was challenging but lots of fun, and I can see it in a new light now as a tool for new doctors.
In closing, Immersion Studios really does have some awesome technology to show the world. It's just a shame that it's not as grand as it once was.
Interesting links I think you should check out:
Immersion Studio's Site
Ottawa Light Rail Project
Facial Recognition Technology
Ogre 3D Graphic Rendering Engine
Houdini - Side Effects Software
Trauma Center: Second Opinion
Song Ho Ahn's Personal Website