Cloud and Multitouch CAD/PLM = Engineer’s Nightmare?

What I learned this week … was sparked by a conversation with a friend from the industry over a drink last night. We were discussing the cloud, PLM, multitouch, and IT in general. To be fair, there were other topics of conversation, but he is one of the people that I really respect for his insight into technology. We were discussing my thoughts on PLM in the Cloud, when it finally struck me. Are we going to ruin the design process for experienced engineers by hampering their real-time interaction with the system? Are we heading in the right direction for tomorrow’s engineers?

What Am I Talking About?

Work with me for a minute; this conversation was after only one beer, so I think it makes a lot of sense. We were talking about what kids today will expect in the user interface of the future – how our kids talk on their headsets and use their game controllers so naturally, doing things we don’t even understand. They push combinations and series of buttons in rapid succession to make things happen in their game – in their virtual world. Then it struck me – why am I so excited about multi-touch and user interfaces that help replicate the real world? Isn’t the whole point of using a computer to go beyond what you can do manually? To super-enable your abilities?

OK, back to CAD and PLM. Multitouch, 3D manipulation, and motion interfaces are cool. We all saw Iron Man, and we have seen demonstrations of multi-touch CAD. Now I am asking “so what?” OK, I love multitouch (and I want an iPad). But I have a tablet PC with a touch-sensitive screen, and how often do I pull my hands off of the keyboard to touch the screen (hint: no fingerprints on it)? I don’t even like to take my fingers off of the keyboard to grab the mouse, so I have learned a lot of shortcut keys and typeahead tricks. Why? I don’t want to replicate getting a blank piece of paper out of my desk, writing a report on it, making copies, manually distributing it to colleagues for review, and then filing it in a file cabinet. The real world is much less efficient than my virtual computer world, so why replicate it in my user interface? OK, we all know the answer: it reduces the learning curve, and it makes interaction more intuitive. But for the experienced user I am going to call that assumption into question (translate as you will).

For the experienced user – particularly for the people who grew up using Xbox controllers to manipulate their virtual world in ways they could never interact with the real world – we need to do better. Don’t make them touch the screen; take advantage of the fact that they have ten fingers that can all act independently. Give them a motion-sensitive Wii/Xbox-style controller that lets them do ten things at a time. Track their eye motion. Read their brain waves. The point is to most effectively translate and extend the ideas in the designer’s mind to the system. For the first-time user, multi-touch makes sense. For marketing presentations, the same. But a day-to-day, interactive interface between an engineer’s fast-moving brain and their high-powered computing equipment has to be fast and efficient for the experienced user – and that doesn’t necessarily mean natural or intuitive. Particularly when the definition of “intuitive” changes as more of the Xbox generation sits in front of the CAD system.

What Does This Have to Do with the Cloud?

OK, if you are still with me, I appreciate it. I know this has gotten long, and I haven’t even touched on the cloud yet, so I will make this brief. I pointed out two types of concerns in my post on PLM and the cloud: one set was corporate, the other was performance for the user. Let’s relate the concepts above to the real-time performance of an engineer. A lot of the buzz around CAD in the cloud has discussed the challenge of rendering graphics rapidly and getting them back to the engineer. That is a big concern, and posts like Josh Mings’ piece on SolidSmack about SolidWorks on the cloud suggest that progress is being made.

But what about input performance? If the goal is to make the human-machine interface as efficient as possible and not distract the engineer from innovating, there can’t be a lag between action and reaction. Part of that lag is computing and rendering responses; the other part is capturing what the engineer is doing. This is where I get concerned about lag times in the cloud. Maybe I need to look back at my son’s Xbox experience and just get over it? But I still have a lingering concern about maintaining real-time user-machine interfaces through the cloud. I know a lot can be done client-side on the PC or workstation, but I still have to wonder if we are heading in the right direction for the real design jocks. Maybe it is too much to ask engineers to learn that level of interaction with their systems, but won’t the Xbox-controller-wielding generation expect that, and won’t it be intuitive to them? If X-A-B-Y-LR-LR-X means pass the football in their game, why couldn’t they learn that it means create a thumbnail of my 3D model and check it into the PLM system? Then I am confident that powerful computing infrastructure (in the cloud or elsewhere) can execute on it.
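
Just to make that combo idea concrete, here is the kind of thing I have in mind. This is nothing more than a rough sketch – every button name, class, and function in it is made up for illustration, not taken from any real CAD or PLM API:

```python
# Rough sketch: map a controller button combo to a CAD/PLM command.
# All names (buttons, ComboMapper, check_in_thumbnail) are invented here.

import time

class ComboMapper:
    """Matches recent button presses against registered combos."""

    def __init__(self, timeout=1.5):
        self.timeout = timeout   # seconds allowed between presses in a combo
        self.combos = {}         # tuple of buttons -> action to run
        self.buffer = []         # recent (button, timestamp) history

    def register(self, sequence, action):
        self.combos[tuple(sequence)] = action

    def press(self, button):
        now = time.monotonic()
        # Drop presses too old to be part of the current combo.
        self.buffer = [(b, t) for b, t in self.buffer if now - t <= self.timeout]
        self.buffer.append((button, now))
        recent = tuple(b for b, _ in self.buffer)
        for sequence, action in self.combos.items():
            if recent[-len(sequence):] == sequence:
                self.buffer.clear()
                action()
                return

def check_in_thumbnail():
    # Placeholder for "render a thumbnail of my 3D model and check it into
    # the PLM system" -- the real calls would go to the CAD and PLM APIs.
    print("Thumbnail rendered and checked into PLM")

mapper = ComboMapper()
mapper.register(["X", "A", "B", "Y", "LR", "LR", "X"], check_in_thumbnail)

# Simulated input; in a real client these presses would come straight from
# the controller driver's event stream.
for button in ["X", "A", "B", "Y", "LR", "LR", "X"]:
    mapper.press(button)
```

The point is not the specific buttons; it is that the mapping layer is tiny and can live on the client, while the heavy computing sits wherever the horsepower happens to be – in the cloud or elsewhere.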

Implications for Manufacturers

I realize that I may not have given you much that is actionable today, so I will leave you with a thought or two to ponder. All of the new UI ideas are cool, and there are huge benefits for companies in moving applications to the cloud. But try before you buy. In your environment. With your infrastructure. And your people. And keep the capabilities of bright, highly talented, gaming-savvy, trained, dedicated engineers in mind as you evaluate future user interfaces. Multitouch will have great uses in engineering software, and cloud computing has great promise. But let’s be careful what we ask for so we don’t hamper our future innovators. And for goodness’ sake, let’s make sure we don’t make them put their hands on the screen unless it is really helping them do something more natural (like sketching) that they can’t do better with an Xbox controller.

So those are my (somewhat random) thoughts; I hope you found them interesting. Do you agree? I didn’t; if you did, let us know about it.

  • http://lifeupfront.com Jeff Waters

    Great points, Jim. I would say that cloud computing apps are only useful if the timing between action/reaction is instant. Consider another application genre, CRM. The first time I moved from a local CRM tool to a cloud-based one, I was initially put off because it…. was…. so…. slow… for… the… data…. to…. update… after… I… pushed…. the… submit… button.

    That issue has gone away with faster computers, faster broadband access, and probably better backend cloud apps. Salesforce.com, Evernote, Zoho, and other apps are great examples of this.

    I suspect it’s going to take a whole new level of broadband speed to make that instant response possible for the types of interactions required for CAD and CAE. But, I could be wrong (hope so!)

    On the positive side, I have to believe that 5 years from now I’ll look back at the broadband speed available today and say, “Remember the dark old days when it took 5 minutes to download a large iTunes video podcast? How the hell did we get any work done back then?”

    As for multitouch, I think we’ll see it become a bigger part of all applications. But I would expect it to become a bigger part of gaming before we see it really take off for engineering apps.

    Frankly, I believe the real IO device of the future is a simple hat that senses neural activity. The first generations of that will act like multitouch in that you are still selecting operations (sphere tool->select center->define size->accept). The big breakthrough, though, will be for you to imagine a sphere shape, and it just shows up on the screen at the scale you are thinking of…

    • http://www.tech-clarity.com Jim Brown

      Jeff,
      OK, sign me up for the mind-helmet device. While we are dreaming, let’s connect it to some artificial intelligence / innovation software that senses you thinking of a sphere and suggests an alternate shape based on design context/criteria. Maybe a TRIZ-enabled helmet?

      Great contribution, thanks.
      Jim

  • Tiago Santos

    Hi Jim
    I agree with Jeff on his opinion about cloud computing. So far, the main problem I have had working with PLM has been precisely the network connection ending up being the bottleneck of the whole system.
    I think any new technology isn’t a nightmare for anybody unless they make it so.
    In terms of the future, I would look at the new voice control technology from Google being used in the Nexus One, or some upgrade to the existing space mice, which are quite convenient for CAD software.

  • http://www.plm.automation.siemens.com/en_us/ Nik Pakvasa

    Jim

    Great post! There is much buzz about ‘cloud’ which is sometimes disconnected from reality. Please don’t get me wrong. I am a great believer in ‘cloud’…someday everything will be on the cloud, including CAD. But not yet, and certainly not CAD. And yes, I want that sexy iPad. Where can I sign up?

    The most successful enterprise software on “cloud” today is salesforce.com. It is what I call a transactional application. CAD is not a transactional application; it is very interactive, and it requires downloading large amounts of CAD data instantly. So there are two key issues we need to overcome for a CAD system to be practical on the cloud – the first is the performance of downloading CAD data, and the second is real-time interaction with CAD models – the problem you have so nicely articulated. And then there is the perennial problem of network bandwidth and uptime. There is a very nice and timely article with a very appropriate headline – “As Devices Pull More Data, Patience May Be Required” – in the NY Times about the mobile network bandwidth issue. (This may not apply to cable/DSL connections?)
    http://www.nytimes.com/2010/01/28/technology/28overload.html?scp=1&sq=network%20bandwidth&st=cse

    Regards

    Nik

    • http://www.tech-clarity.com Jim Brown

      Tiago,
      I love your comment that any new technology isn’t a nightmare unless they make it so. That is really all I am trying to say: Be cautious and make sure it will work for the people that are driving innovation in your company. I believe all of these problems can be solved, and in some instances manufacturers may be ready today. Thanks for summing it up so nicely!

      Jim

    • http://www.tech-clarity.com Jim Brown

      Nik,
      Great job pointing out the NY Times article. Maybe a good idea to do some load testing on the network in addition to testing an individual workstation!

      Always a pleasure to hear from you!

      Jim

  • http://www.linkedin.com/in/tdennis Tord

    Hi Jim et al.

    My experience with cloud games is that I still have to load the game on my local PC, and then just the instructions get passed through to my netbuddies about where “I” am in the environment and what I am doing. Similarly, I see the first-gen “CAD in the cloud” keeping the instructions of how the model is made and passing those to my computer to replicate, and vice versa, instead of the entire CAD model.

    Second gen will require changes in the coding – maybe not in how the CAD part is created, but in how it is put together. For example, I may use parametric operations to author a part, but it is stored as boolean primitives (like virtual Legos). Or maybe the cloud keeps multiple tessellated surface models of each step in the CAD history and sends me the appropriate level based on what I am doing to the model.
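
    To make that second-gen idea a little more concrete, here is a rough sketch of the “multiple tessellation levels per history step” approach. Every name and number in it is invented purely for illustration, not taken from any real CAD-in-the-cloud service:

    ```python
    # Rough sketch: the server keeps several pre-tessellated meshes for each
    # step in the CAD feature history and hands back only the level of detail
    # that matches what the client says it is doing. All placeholders.

    MESH_CACHE = {
        # (history_step, lod) -> mesh payload (just a placeholder string here)
        (3, "coarse"): "step3_coarse_mesh",
        (3, "medium"): "step3_medium_mesh",
        (3, "fine"):   "step3_fine_mesh",
    }

    def choose_lod(interaction):
        """Pick a tessellation level from what the user is doing to the model."""
        if interaction == "orbiting":    # just spinning the view: coarse will do
            return "coarse"
        if interaction == "measuring":   # needs accurate geometry
            return "fine"
        return "medium"                  # reasonable default for everything else

    def fetch_mesh(history_step, interaction):
        # Only the comparatively small tessellated mesh crosses the network,
        # never the entire parametric CAD model.
        return MESH_CACHE.get((history_step, choose_lod(interaction)))

    print(fetch_mesh(3, "orbiting"))   # -> step3_coarse_mesh
    print(fetch_mesh(3, "measuring"))  # -> step3_fine_mesh
    ```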

    As for multi-touch, come by my cube and see the fingerprints on my non-touch monitor :-) I can’t wait for a virtual chainsaw to make cuts to a stubborn part.

    - Tord

    • http://www.tech-clarity.com Jim Brown

      Tord,
      I will send you some monitor wipes that will take care of those fingerprints.

      I expect that cloud applications will focus network traffic on inputs and outputs and leave the rest of the application and information on the servers. As I heard a long time ago when client-server started being replaced by web-based applications – “I want my mainframe back.” IT liked the control of the mainframe. Nobody loaded a random program that crashed the operating system on the client, because it was a dumb green screen. All of the important configuration and maintenance work was controlled “in the shop” by IT. So the farther the data and the software are from the user, the easier they are to control. I suspect it is probably safer on a hosted server than on hundreds of local users’ hard drives.
      Anyway, back to the apps. What we need on the client side is the ability to sense and respond to what the user is communicating (mouse, touch, mind-control helmet, etc.) and provide feedback to them. That seems like a dream to most people that have to support the infrastructure: just make sure the users have a network connection and a browser, and the rest is handled in a controlled environment. How the technology will shift to make sure the input/output feedback loop is efficient and responsive, I don’t know, but that (to me) is a big key to making this work. My coding days are a few years behind me at this point, so I am sure there are others far more qualified to figure it out.
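
      Even so, purely as a back-of-the-envelope sketch of that split (every function and field name below is hypothetical, and the “server” is just a local stand-in rather than anything hosted), the client side of the loop might look something like this:

      ```python
      # Thin-client sketch: capture user events, ship them out, display what
      # comes back, and keep an eye on the round-trip lag. Nothing here calls
      # a real CAD, PLM, or cloud API -- it is illustration only.

      import json
      import time

      def server_handle(event_json):
          """Stand-in for the hosted application: compute and 'render' a reply."""
          event = json.loads(event_json)
          # Pretend the server regenerated the model and rendered a new frame.
          return json.dumps({"frame_id": event["seq"], "status": "rendered"})

      def client_loop(events):
          """Capture events, send them to the server, and report the lag."""
          for seq, event in enumerate(events):
              sent_at = time.monotonic()
              payload = json.dumps({"seq": seq, "type": event})
              reply = json.loads(server_handle(payload))  # a network call in real life
              lag_ms = (time.monotonic() - sent_at) * 1000
              # The whole argument above is that this number has to stay small
              # enough that the engineer never notices it.
              print(f"{event}: frame {reply['frame_id']} back in {lag_ms:.2f} ms")

      client_loop(["rotate_view", "sketch_line", "extrude", "check_in"])
      ```
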
      All the best,
      Jim