Virtual reality headsets enforce an isolated experience, cutting us off from the people nearby the moment we put one on. But in recent times, when we’re not supposed to have many people nearby anyway, a curious reversal happens: VR can give us a pandemic-safe social experience, like going to our local community theater. That’s an idea [Tender Claws] has been exploring with The Under Presents.
VR hype has drastically cooled, to put it mildly. While some believe the technology is dead and buried, others believe it is merely on a long, tough climb out of the Trough of Disillusionment. It is a time for innovators to work without the limelight of unrealistic expectations. What they need is a platform to experiment, evaluate feedback, and iterate. A cycle hackers know well! The Under Presents is such a platform for its corner of VR evolution.
Most VR titles are videogames of one genre or another, so newcomers to the single-player experience may decide its otherworldly exploration feels like Myst. A multiplayer option is hardly novel in this day and age, but the relative scarcity of VR headsets means this world is never going to be as crowded as World of Warcraft. This is not a bug, it is a differentiating feature. Performers occasionally step into this world, changing the experience in ways no NPC ever could. A less crowded world makes these encounters more frequent, and more personal.
Pushing this idea further, there have been scheduled shows where a small audience is led by an actor through a story. As of this writing, a run of a show inspired by Shakespeare’s Tempest is nearing its end. The experience of watching an actor adjust and react to an audience used to be exclusive to an intimate theater production. But with such venues closed, it is now brought to you by VR.
How will these explorations feature in the future of the technology? It’s far too early to say, but every show moves VR storytelling a little bit forward. We hope this group or another will find their way to success and prove the naysayers wrong. But it is also possible this will all go the way of phone VR. We are usually more focused on the technical evolution of VR here, but it’s nice to know people are exploring novel applications of the technology. For one can’t exist for long without the other.
Steganography involves hiding data in something else — for example, encoding data in a picture. [David Buchanan] used polyglot files not to hide data, but to send a large amount of data in a single Twitter post. We don’t think it quite qualifies as steganography because the image has a giant red UNZIP ME printed across it. But without it, you might not think to run a JPG image through your unzip program. If you did, though, you’d wind up with a bunch of RAR files that you could unrar and get the complete works of the Immortal Bard in a single Tweet. You can also find the source code — where else — on Twitter as another image.
What’s a polyglot file? JPEG images have an ICC (International Color Consortium) section that defines color profiles. While Twitter strips a lot of things out of images, it doesn’t take out the ICC section. Each ICC chunk can hold up to 64 kB of nearly arbitrary data, and a file can carry multiple chunks, up to a limit of 16 MB total.
The ZIP format is also very flexible. The pointer to the central directory is at the end of the file. Since that pointer can point anywhere, it is trivial to create a zip file with extraneous data just about anywhere in the file.
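The ZIP half of the trick can be sketched in a few lines of Python. Note that this is plain concatenation, not the ICC embedding [Buchanan] actually used (Twitter would strip data appended after the image), but it shows why a ZIP reader happily ignores whatever precedes the archive:

```python
import io
import zipfile

# Stand-in for real JPEG data; only the leading SOI marker matters here.
jpeg_bytes = b"\xff\xd8\xff\xe0" + b"\x00" * 64

# Build a ZIP archive in memory and append it after the "image".
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("sonnet.txt", "Shall I compare thee to a summer's day?")
polyglot = jpeg_bytes + buf.getvalue()

# ZIP readers locate the end-of-central-directory record by scanning
# backwards from the end of the file, so the JPEG prefix is ignored.
with zipfile.ZipFile(io.BytesIO(polyglot)) as zf:
    recovered = zf.read("sonnet.txt").decode()
print(recovered)
```

Running `unzip` on the resulting file works for the same reason: the tool finds the archive by its trailing central directory, not by anything at the front of the file.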
Continue reading “Shakespeare In A Zip In A RAR, Hidden In An Image On Twitter”
You know Halloween is coming around when the tweet-reading skulls start popping up. [Marc] wanted to bring the Halloween spirit into his workplace, so he built “Yorick”. In case you’re worried, no humans were harmed (or farmed for parts) in the creation of this hack. Yorick started life as an anatomical skull model, the type one might find in a school biology lab. Yorick’s skull provided a perfect enclosure for not one but two brains.
A Raspberry Pi handles his higher brain functions. The Pi uses the Twitter API to scan for tweets to @wedurick. Once a tweet is found, it is sent to Google’s translate server. A somewhat well-known method of performing text-to-speech with Google Translate is the next step. The procedure is simple: requesting “http://translate.google.com/translate_tts?tl=en&q=hackaday” returns an MP3 file of the audio. To get a British accent, simply change to google.co.uk.
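Building that request amounts to a one-liner; here is a sketch using the endpoint and parameters exactly as described above (Google has since locked down this unofficial API, so treat it as historical):

```python
from urllib.parse import urlencode

def tts_url(text, domain="translate.google.com", lang="en"):
    """Build the Google Translate TTS request described above.

    Swap the domain for translate.google.co.uk to get a British accent.
    """
    return f"http://{domain}/translate_tts?" + urlencode({"tl": lang, "q": text})

print(tts_url("hackaday"))
```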
The Pi pipes the audio to a speaker, and to the analog input pin of an Arduino, which handles Yorick’s lower brain functions. The Arduino polls the audio in a tight loop. An average of the last 3 samples is computed and mapped to a servo position. This results in an amazingly realistic and automatic mouth movement. We think this is the best part of the hack.
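Here is a quick Python model of that smoothing-and-mapping step. The three-sample window and the 0–1023 ADC range come from the write-up; the 45-degree jaw travel is our own assumption for illustration:

```python
from collections import deque

def map_range(x, in_min, in_max, out_min, out_max):
    # Integer re-mapping, in the style of Arduino's map() function.
    return (x - in_min) * (out_max - out_min) // (in_max - in_min) + out_min

class JawMapper:
    """Average the last three audio samples and map to a servo angle."""

    def __init__(self, max_angle=45):  # assumed jaw travel in degrees
        self.window = deque([0, 0, 0], maxlen=3)
        self.max_angle = max_angle

    def update(self, sample):
        # sample is a 10-bit ADC reading (0..1023) of the audio line.
        self.window.append(sample)
        avg = sum(self.window) // 3
        return map_range(avg, 0, 1023, 0, self.max_angle)

jaw = JawMapper()
for s in (0, 512, 1023):
    print(jaw.update(s))
```

On the real hardware the same logic runs in the Arduino’s polling loop, with the result fed straight to the jaw servo; the three-sample average is just enough smoothing to keep the mouth from chattering on every audio spike.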
It wouldn’t be fair for [Marc] to keep the fruits of his labors to himself, so Yorick now has his own Livestream channel. Click past the break to hear Yorick’s opinion on the Hack A Day comments section! Have we mentioned that we love pandering?
Continue reading “Alas, Poor Yorick! I Tweeted Him”