An Analyst’s Take on the Future of Smart Surface Technology


Microsoft’s most recent entry in the visual collaboration space has generated a lot of interest, and almost as much controversy.

While the existing specs and feature set of the Microsoft Surface Hub have been covered at length, I wanted to take a few minutes to share some things that I think are missing from the design. To be fair, many of my dream features may not be feasible with today's technology and infrastructure. With that in mind, this piece is not so much a critique of the actual Microsoft Surface Hub as a look forward at what the product category should evolve towards.

The Ultimate Smart Surface of the Future

Operating System
The ideal product would use MY choice of operating system. If I walk up to a smart surface, it should recognize me (from my wearable or my phone) and immediately offer my operating system of choice. If I live off my iPad, then it should be iOS. If I consider my Win7 desktop to be my home base, it should present me with Win7. It isn't up to the device to choose the ultimate operating system. The device should be a transparent portal to my personal internet and computing experience. Ideally, there would be many more operating systems than the few we currently have to select from. I already warned you that my requests may not be feasible; I understand the technology challenges of building a piece of hardware capable of natively running multiple operating systems. But I am presenting the ideal, and ideally the smart surface of the future should not impose any choices, including OS, on the user.
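To make the recognize-and-personalize idea concrete, here is a minimal sketch of the lookup a surface might perform when a visitor's wearable or phone announces itself. Everything here is hypothetical: the device IDs, the `UserProfile` fields, and the in-memory registry stand in for what would really be a call to an identity service in the cloud.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class UserProfile:
    name: str
    preferred_os: str   # e.g. "iOS" or "Windows 7"
    desktop_uri: str    # cloud location of the user's virtual workspace

# Illustrative stand-in for a cloud identity service.
PROFILE_REGISTRY = {
    "wearable-1234": UserProfile("David", "Windows 7", "cloud://onedrive/david/desktop"),
    "phone-5678": UserProfile("Alice", "iOS", "cloud://icloud/alice/home"),
}

def session_for_device(device_id: str) -> Optional[UserProfile]:
    """Return the environment the surface should present for this visitor,
    or None if the device is unrecognized (fall back to a guest session)."""
    return PROFILE_REGISTRY.get(device_id)

profile = session_for_device("wearable-1234")
```

The point of the sketch is that the surface itself holds no opinion: it simply resolves whoever walked up to it into that person's own environment.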

User Interface
When I use a smart surface of the future it should bring up my desktop, just as I see it on my home office PC. I am actually already moving towards this, as I have my desktop itself saved in the cloud on OneDrive. This is my virtual workspace. I should be able to leave a PowerPoint sitting on my desktop, and access it immediately as I walk up to any smart surface. Forget about wirelessly connecting my laptop to share a slide deck. The big screen on the wall should just BE my desktop, with my working files immediately accessible in the same manner as if I were sitting at my home office.

If you live off your iPad, rather than your office PC, the smart surface would bring up your iPad home screen. It would work not just like a huge iPad, but like a huge version of YOUR iPad. Again, the point is that the device shouldn’t be imposing anything on the user. It should be a non-device. It should just be a piece of glass connecting you to your personal internet/computing workspace.

Some may worry about privacy, but you should already be treating your virtual desktop the same way you treat your physical desktop. If someone walks into your office, they can see what is sitting on your physical desktop. You don’t leave anything out that shouldn’t be seen. Those of us who host webcasts are already in the habit of keeping private things off the desktop and it is a good model to follow.

File Storage
The ultimate smart surface of the future would have to rely heavily on cloud storage. If it is going to truly deliver a personalized and full-featured experience to everyone, it needs to allow every user to access all of their personal files. For someone like me, who already has all of his working files in the cloud, this should be an easy matter. The fact that many people still use local storage makes this another feature with a feasibility issue.

The matter of program support is particularly challenging. Ideally, I should be able to run any program I own on any smart surface I approach just as seamlessly as when I run the program on my home PC. Supporting this may be impossible. You can’t have every program in the world pre-installed on the surface. A virtual machine approach may provide an answer, but it will require a lot of development to create a true solution to this issue.
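The virtual machine approach mentioned above can be sketched at a very high level: rather than pre-installing every program, the surface would request the user's own cloud-hosted VM image and simply stream its display. This is an illustrative outline only; the catalog, image names, and session descriptor are assumptions, and a real implementation would call out to a VDI or cloud desktop API.

```python
def launch_remote_session(user_id, image_catalog):
    """Pick the user's personal VM image and describe the streaming
    session the surface would attach to (hypothetical sketch)."""
    image = image_catalog.get(user_id)
    if image is None:
        raise LookupError(f"No personal VM image registered for {user_id!r}")
    # A real implementation would start the VM via a cloud API here;
    # we only return a descriptor of the session to be streamed.
    return {"user": user_id, "image": image, "protocol": "display-stream"}

catalog = {"david": "win7-personal-image"}
session = launch_remote_session("david", catalog)
```

The appeal of this model is that the surface never needs any program installed locally; the hard part, as noted above, is making the streamed session feel as seamless as sitting at your own PC.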

Form Factor
Smart surfaces should be made to fit a variety of uses and locations. Organizations and their architects should be able to decide on the right-sized (and shaped) piece of glass for any given meeting room, rather than try to design a room around a surface product available in only two sizes.

Annotation
This is yet another feature that should work based on user preference. If I like using actual markers, I should be able to draw on the glass with actual (erasable) markers, and the surface should capture and share anything I draw. If I like to draw with my hand, it should already know this and work just as I expect it to. The user shouldn't have to select a method of annotation; the surface should know each user's preference via communication with the user's wearable or mobile. Each user should be able to just walk up to it and start annotating in their own way.

Device Sharing
The functionality of device sharing is very much in demand right now. Wireless share solutions are finding market success as meeting presenters and attendees love being able to quickly bring their files up on a shared screen. With the smart surface of the future, the entire concept of device sharing shouldn't be needed and may not even make sense. If a smart surface presents me with my actual personal desktop and all my files as I walk up to it, there is no need to share anything from my laptop.

Cameras
As long as I am dreaming, why not ask for the holy grail of videoconferencing: true eye contact. The screen itself has to be the camera. It doesn't matter how well you position a camera at the top or the sides of the screen; you won't get eye contact unless someone is trained to look at the camera. People look at the middle of the screen, so we have to come up with a way for people to make eye contact while they are looking right into the middle of the screen.

If someone wants to use the surface to present to a remote audience, there should be a second camera facing the surface. This is another major technical challenge. A camera on a huge selfie stick extending from the surface would provide the right angle, but would be very distracting to local meeting participants.

User Controls
Once again, the controls should change depending upon the user. Whether you use a mouse/keyboard, hand gestures, vocal commands, or touch screen controls, the choice should be yours. As you approach the smart surface, it should communicate with your wearable or mobile to find out how you like to control your computing portals. Every enabled piece of glass in the world should immediately become your personal virtual world as you walk up to it. There should be no learning curve, setting changes, or multiple sets of commands to master.

Is all of this far too much to ask, or is it the inevitability of the IoT? I would like to think that many of these items will be in place in the foreseeable future, although some of them may be just too far-fetched for some time.


About the Author

David Maldow is the Founder & CEO of Let's Do Video and has been covering the visual collaboration industry, and related technologies, for over a decade. His background includes 5 years at Wainhouse Research, where he managed the Video Test Lab and evaluated many of the leading solutions at the time. David has authored hundreds of articles and thought pieces, both at Telepresence Options, where he was managing partner for several years, and here at Let's Do Video. David often speaks at industry events and webinars, and hosts the LDV Video Podcast.
