Tilt Five is an Augmented Reality (AR) system aimed at tabletop gaming, developed by Jeri Ellsworth and a group of other engineers, and now up on Kickstarter. Though it appears to be quite a capable (and affordable, at $299) system based on the Kickstarter campaign, the most remarkable thing about it is probably that it has its roots at Valve. Yes, the Valve behind the Half-Life games and the Steam games store.
Much of the history of the project has been covered by sites such as this Verge article from 2013. Back then, [Jeri Ellsworth] and [Rick Johnson] were working on Project CastAR, which at the time looked like a contraption glued onto the top of a pair of shades. When Valve chose to go with Virtual Reality instead of AR, Project CastAR began its life outside of Valve, with Valve’s [Gabe] giving [Jeri] and [Rick] his blessing to do whatever they wanted with the project.
Six years later, Tilt Five is the result of the work put in over those years. Looking more like a pair of protective glasses, along with a wand controller that bears an uncanny resemblance to a gas lighter for candles and BBQs, it promises a virtual world like one has never seen before, courtesy of integrated HD projectors that are aimed at the retroreflective surface of the game board.
A big limitation of the system is also its primary marketing feature: because the system requires the game board as its projection surface, the virtual world cannot exist outside the board. For a tabletop game (like Dungeons & Dragons), that should hardly be an issue. As for the games themselves, they run on an external system, with the signal piped into the AR system. Game support for the Tilt Five is still fairly limited, but more titles have been announced.
(Thanks, RandyKC)
I saw this when it was first announced and am very excited for Jeri’s dev diaries. She has a YouTube channel and plans on explaining a lot of the tech behind the glasses, so I definitely recommend people check it out. I kind of wish that had been mentioned in the article.
That said, I’m a little disappointed the demo is some fake rendered footage. I get that you need that to show off its capabilities, but I would have loved to see even just a stationary high-quality image from a camera with the lenses in front of it. That way I could really be confident that what I am seeing is what I will get. Right now the sales pitch feels almost like a video game trailer that says “pre-rendered footage”, which translates to completely worthless and not at all like what you will get in the final product.
Here’s to hoping it succeeds this time.
I’m just imagining the awesome things this could be used for, beyond board games. Especially teaching/presentations.
They address this on the KS page:
“So some of you are saying “but you showed composited shots in your video.” Yeah, we knew you’d say this. In order to show the group experience of playing games around the table together, we had to make a creative choice. Shooting through-the-lens footage is difficult, and when you add in actors and scenery it’s next to impossible. Here are many of the same shots as you would see them through the lens of the glasses, minus the actors and scenery of course. ”
Followed by a video of “through-lens” footage. It looks good, if a little stuttery/delayed in places.
When recording through one eye of the glasses, those artefacts come from the refresh rate of that projector interacting with the refresh rate of the digital camera doing the recording. Probably the easiest way to see the problem for yourself is to use the video record function of your mobile phone to film a short clip of a TV at the same refresh rate as (or slightly lower than) the TV.
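As a rough numerical illustration of the effect described above (the figures here are made up, not from the comments), the visible banding drifts at the beat between the projector’s refresh rate and the camera’s frame rate:

```python
def beat_frequency_hz(display_hz: float, camera_fps: float) -> float:
    """Rate at which bright/dark bands appear to drift when a camera
    films a display or projector refreshing at a different rate."""
    return abs(display_hz - camera_fps)

# A 60 Hz projector filmed at 59 fps produces a slow, visible 1 Hz flicker;
# filming at exactly 60 fps would instead freeze the banding in place.
print(beat_frequency_hz(60.0, 59.0))  # 1.0
```

The closer the camera’s frame rate is to the projector’s refresh rate, the slower (and more noticeable) the drift, which is why the phone-filming-a-TV experiment shows the artefact so clearly.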
Yeah, that makes sense, but I still wonder about performance issues. If you look at the hand-moving-blocks demo at about 50 seconds into the through-the-lens video, there seems to be a fair bit of latency between the hand touching the cubes and the cubes moving. Maybe it is linked to the physics simulation, but it could still be annoying in a game.
Delay of that sort is inevitable; it’s just a question of how long it is. Input lag will always exist, and in this situation it seems more likely to depend on the performance of the computer it’s connected to than on the tech itself.
Judging by Norm from Tested’s reaction to them, I’d bet it’s not noticeable to the user.
Either way, it’s a very elegant, lightweight AR solution, and some of the tech in it should make for a much better experience at a good price point. VR/AR otherwise needs a head full of optics to do multiple focal lengths, or, as is currently the case universally (as far as I know), a smaller heap of optics just to have one fixed focal length (and a beast of a GPU throwing frames at it all the time without fail).
Not being made of money, I’ll be passing on this KS. But I can well imagine how awesome it would be with Fantasy Grounds (in which I have been a DM for a while), and knowing that support is in the works really makes me wish I could give it a try.
Oh well, I’m dumb for not scrolling down then. The stutter is obviously a refresh rate issue, since you’re essentially filming a video projector, so that’s totally expected.
The delay, however, can clearly be seen in the block-hand demo. My guess is this is a processing issue, the system trying to identify what the hand has “hit”, and not a delay from the filming (since that’s not how delay works), so this is about the delay you should expect in use. That said, the delay is fairly minor for what they’ve achieved; it’s a lot better than Kinect, from what they are showing. It also really depends on who is using it and for what. In the case of D&D, instant response time really isn’t necessary; for trying to port games over, though, the current delay would be pretty awful and clumsy.
One thing that I feel is a bit disingenuous: the composited footage shows rendered content extending beyond the edges of the game board.
This is impossible with the actual hardware, since there is no retroreflective surface there to bounce the image off.
Not wanting to film a marketing video through the headset is understandable, but over-representing the device’s capabilities is pushing it, especially in AR.
The public should be skeptical: Magic Leap burned the industry’s credibility with its marketing videos, and most of the public has never used an AR HMD.
So even though they’ve done literally everything else differently from ML, been upfront about every single limitation of the hardware, talked in detail about how it works, provided public third-party hands-on tests, waited until they had an actual near-complete product before even announcing it, and provided through-the-lens shots upfront… they’re being “disingenuous” because of a couple of videos (which they were also completely upfront about being composited shots) intended only to give a layperson a quick (*not* perfect) idea of what the device is for?
Listen, I’m excited about the product. It’s an incredibly clever design. I think it’s going to be the best combination of publicly available, accessible, and “just works” that AR has seen to date.
But yes, the video shows their product doing something that it cannot do, and I call that disingenuous.
I wouldn’t go so far as to say it’s deliberate deceit, but it is disappointing Jeri let those compositing errors slip through on her watch. She’s better than that, and knows better than that.
I’m very glad to see this concept progressing. But I’m disappointed the top photo shows AR objects visible outside the perimeter of the board. Obviously faked.
Where? Also, you can’t take a ‘photo’ of this anyway; there’s nothing physically there to photograph, so of course it’ll be ‘fake’. However, I’ll agree that any fakery should show only what would actually be seen through the glasses.
Huh? Just look at it. All the objects above the top edge (far side) of the board should not be visible, and they most emphatically will not be opaque, blocking the view of the carpet (for example) behind them.
And of course you could take a photo of it. Just put a camera where your eyeball goes. The video even shows how.
Sure they can be visible; you’re compositing video onto the frame of the glasses. What you can’t do is have someone’s arm show in front of those objects; they’ll appear to be floating on top of the arm instead. I.e., if someone reached across the table, all the projected objects would show up on them rather than behind them. (Technically it can be done, but it would be more difficult.)
What would be impossible is to do this without the board, which gives the glasses the distance to the playing surface.
Eh, I think that image is more artistic than anything. You don’t directly see things in the glasses. There’s a small projector on top of the glasses, and the board is a retroreflector. The image bounces off of that and returns directly to the person projecting it.
I might be wrong, this is from memory reading about it several years ago.
Watch the “Shot through the lens” video on the Kickstarter page. The image is only visible on the board itself because that is the projection surface, not the frame of the glasses. It also shows that arms and objects will obscure the image (which is the intended effect).
I think you don’t understand what the board does: it is a retroreflector. Think of it like a very fancy projector screen that reflects light directly back toward the projector, with a small offset.
The projector in the glasses points at the board, and the board reflects that projection back to the wearer’s glasses.
Only the projection that strikes the board, and is thus retroreflected, is visible to the wearer. By the nature of retroreflection, each wearer can only see the projection from their own glasses.
The light reflected from a non-retroreflective surface, like, say, a person’s arm, is scattered far and wide and is not going to be bright enough to perceive as an image. The wearer can only see what is reflected by the board.
So in the artistic impression, everything that is not “in front of” the board is impossible.
So, the way this system works is pretty clever, but it’s not like what you described.
The glasses themselves don’t contain a display. Rather, they have a mounted projector which projects its image onto the game board. The game board is retro-reflective, so it shoots the projected image right back where it came from, which is close enough to the eye that the user can perceive the image. The projector’s pose is calculated relative to the game board by an image sensor mounted next to the projector and an IMU. This pose is used to render with correct perspective.
So really, in the actual product there is no compositing being done. For the rendered content to reach your eye, it must first hit the game board to be bounced back. The content will be rendered in 3D with depth and all, but it will be clamped to the game board at all times.
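To make the “clamped to the game board” point concrete, here is a minimal geometric sketch (hypothetical code, not Tilt Five’s; the board dimensions and coordinates are invented): a virtual point can only reach the eye if the ray from the glasses toward it actually crosses the retroreflective board.

```python
def visible_on_board(eye, point, board_w=0.6, board_h=0.4):
    """Return True if the ray from `eye` through `point` crosses the
    board rectangle centred at the origin in the z=0 plane.
    All coordinates are (x, y, z) tuples in metres."""
    ex, ey, ez = eye
    px, py, pz = point
    dz = pz - ez
    if dz == 0:       # ray parallel to the board plane: never hits it
        return False
    t = -ez / dz      # ray parameter where the ray reaches z = 0
    if t <= 0:        # board plane is behind the projector
        return False
    ix = ex + t * (px - ex)
    iy = ey + t * (py - ey)
    return abs(ix) <= board_w / 2 and abs(iy) <= board_h / 2

eye = (0.0, -0.5, 0.5)                           # glasses above the table edge
print(visible_on_board(eye, (0.0, 0.0, 0.1)))    # over the board: True
print(visible_on_board(eye, (0.0, 1.0, 0.1)))    # past the far edge: False
```

Anything whose ray misses the board, such as objects rendered past its far edge, never gets retroreflected back, which is exactly why the composited shots showing content beyond the board drew criticism in the comments above.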
Check out the Tested video I posted below. The whole video is good, but you can skip to 19:45, where she addresses what you’re talking about here.
Try actually reading the Kickstarter page; all of this is explained or blindingly obvious. Your assertion is also incorrect: the AR glasses can render whatever they like, provided you have the glasses on.
Try actually thinking about and comprehending what it’s doing and what it’s capable of. It simply can’t produce an image that appears outside the retroreflective area (technically it can, but it’s very, very faint).
Jeri acknowledges the composited video, and that’s totally fair. What is an error, and misleading, is the compositing oops that shows objects appearing outside the board area.
Adam Savage’s Tested has a video on this project: https://youtu.be/Jse-GwkcYgI
They film through the lenses of the glasses.
The Tested channel on YouTube has a great discussion with Ellsworth, a Q&A detailing the technical challenges that were overcome in the refinement of the glasses’ design.
https://www.youtube.com/watch?v=Jse-GwkcYgI
Jeri already delivered pretty much this very same Kickstarter 4-6 years ago (technicalillusions/castar-the-most-versatile-ar-and-vr-system), but the company was lured by none other than Andy Rubin(*) into a VC circus, and the rest is rather predictable: your product is worth more unreleased, and the whole purpose of the company becomes finding a bigger fool to buy you out. The board fired the technical staff, bought a few game studios, ran out of VC money, and moved on to pillage something else.
You can listen to the whole story straight from the horse’s mouth in this podcast: https://theamphour.com/394-jeri-ellsworth-and-the-demise-of-castar/
I have zero doubt about the hardware or the deliverability of the KS; I just hope they didn’t touch “smart Silicon Valley money” this time around.
*) A former Apple/General Magic employee and founder of two companies with very successful exits (Danger, Android). The very best kind of VC you could hope for.
The $20k bubbles mood board is probably the quickest way to understand how Silicon Valley treats startups: they assume your idea will never go anywhere, so everyone tries to leech as much of the investment money as possible while it lasts. https://twitter.com/jeriellsworth/status/983071750068977665
This tech would make for an extremely realistic simulator for driving, flying, space travel, etc.
I wouldn’t be surprised if a large company offered to buy up this tech.
Brilliant! I was wondering when AR would enter D&D worlds 😁
All these ideas are actually already available for users to create themselves. For example, with the BlippAR platform, we are creating educational games and materials by overlaying book characters with their quotes, questions, clues…
It’s going to develop rapidly, I believe 😁
Imagine learning about history through an AR tabletop game 🤩