SpaceClaim acts as the nucleus of the conceptual design process, integrating data from electrical engineers, industrial designers, and engineering analysts.
Anyone who has seen demos of Windows 7 or Microsoft Surface technology will recognize parallels with the futuristic man-machine interface featured in the film Minority Report. And those of you who have fallen in love with the brilliant iPhone interface may be wondering when we will see similar technology refreshment on our technical desktops.
“It would be amazing if you could design products with the virtual displays used by Tom Cruise in Minority Report,” says Sandy Joung, senior director of Desktop Product Marketing at PTC, “but from our perspective it’s likely to be a long time... With technology like the Wii, it is possible to do something in the near term to marry that environment with CAD, but in terms of mainstream adoption it’s likely to be a while.”
New Interfaces On the Way
Something much closer on the horizon is Windows 7, currently being demoed, which uses simple two-handed multi-touch gestures to control pan and zoom. This kind of interface is expected to go live in technical applications around 2010, and it is being presented as one possible next-generation interface for Windows.
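The arithmetic behind such two-finger gestures is straightforward: zoom follows the ratio of the distance between the two touch points, and pan follows the movement of their midpoint. A minimal sketch (hypothetical function names, not any actual Windows 7 API):

```python
import math

def pan_zoom_from_touches(prev, curr):
    """Derive pan and zoom from two touch points before and after a gesture.

    prev, curr: pairs of (x, y) tuples, one per finger.
    Returns (pan_dx, pan_dy, zoom): midpoint translation and scale factor.
    """
    def midpoint(a, b):
        return ((a[0] + b[0]) / 2.0, (a[1] + b[1]) / 2.0)

    def distance(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    m0, m1 = midpoint(*prev), midpoint(*curr)
    d0, d1 = distance(*prev), distance(*curr)
    zoom = d1 / d0 if d0 else 1.0          # fingers pinch apart -> zoom > 1
    return (m1[0] - m0[0], m1[1] - m0[1], zoom)

# Fingers spread symmetrically: pure zoom, no pan.
print(pan_zoom_from_touches(((0, 0), (10, 0)), ((-5, 0), (15, 0))))  # (0.0, 0.0, 2.0)
# Both fingers shift right by 4: pure pan, no zoom.
print(pan_zoom_from_touches(((0, 0), (10, 0)), ((4, 0), (14, 0))))   # (4.0, 0.0, 1.0)
```

Real touch frameworks report the same primitives (contact positions per frame); the mapping to view transforms is essentially this calculation applied continuously.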
Scott Harris, the recently retired co-founder of SolidWorks and creator of the Cosmic Blobs interactive 3D modeling tool for children, says, “I am a firm believer in higher-bandwidth interactions. The cord on a mouse is ‘very thin’ and much better ways to tell the computer what to do are long overdue.”
Of course, Harris is absolutely right. We’ve been staring at the same glass drawing board since CAD was invented. The current mechanical interface of the mouse and the keyboard has only been marginally disturbed by the arrival of other peripheral devices like the SpaceBall and all its more modern incarnations, but promising technologies such as the thought-control headset from Emotiv may well change all this.
CAE analysts use SpaceClaim to rapidly iterate design concepts to drive the detailed design process and to validate that the detailed CAD models meet specifications.
As we move forward with the gaming generation, the quality of the user experience will become more important. The iPod is a classic illustration of this. You can use pretty much any MP3 player to get the job done, but the phenomenal success of Apple’s products shows the value of the “experience” over the pure function of a competent device. It’s arguably the same with engineering software. Playing to this theme, new products such as Icona’s Aesthetica provide the ability to balance form (aesthetics) and fit (tolerance) to optimize what really matters to a customer — the (visual) experience and ultimately perceived quality.
The need for industry-wide change will become more marked as younger people take up positions of responsibility in the world of engineering. The next generation is accustomed to more engaging and collaborative experiences. They’re used to social networking as a means of collaboration (e.g., Facebook and Bebo) and to immersive experiences from numerous games on the Wii, Xbox, and PS3. More importantly, you can already see a route to the future in the intersection of gaming and collaboration, as reflected in the success of multiplayer games, Second Life, and the plethora of Internet-enabled experiences. Numerous software companies, including the likes of IBM, Siemens PLM, Dassault Systemes, and Autodesk, are tracking these trends, hoping to determine the roles of these technologies in the future of engineering development.
CATIA V5 provides an integrated suite of MCAD, computer-aided engineering, and computer-aided manufacturing applications for digital product definition and simulation.
I would argue that tomorrow’s design environment has less to do with revolutionary technology — and more to do with the way technology is applied to solve design or engineering problems. This view is informed by recent discussions we’ve had with companies in the automotive sector. There is a sea change happening in the way they design — “we don’t want our people to be component designers, we want our people to design cars.” Anything that helps them design cars in a holistic way is good. This requires a change in the focus of the development of engineering software.
Products such as Siemens PLM Software’s Synchronous Technology potentially offer more design “freedom.” Arguably it has set a new standard in modeling technology. The ability to mix parametric with explicit (direct) modeling and interact with models using history-based techniques or not, is unique, and it offers significant performance and productivity gains when used in certain contexts.
Direct modeling itself is not a recent development — consider products from companies such as CoCreate (now PTC), Kubotek, and IronCAD. These direct-modeling tools typically use feature-recognition algorithms to identify geometric relationships between elements of a model, and then use a smart set of editing commands to directly modify the surfaces associated with those inferred features. In short, feature recognition plus direct editing. There seems to be a consensus in the industry that a more appropriate name for this type of editing is “feature-inference modeling.”
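The two-step pattern — recognize features from bare geometry, then edit the surfaces directly — can be illustrated with a deliberately simplified sketch (hypothetical names and data structures, not any vendor’s API):

```python
# Toy feature-inference sketch: infer a "hole" feature from bare
# geometry, then modify its surfaces directly. No history tree exists
# or is consulted -- recognition plus direct editing.

def recognize_holes(faces):
    """Feature recognition, radically simplified: in this toy model,
    any cylindrical face is inferred to be a hole feature."""
    return [f for f in faces if f["type"] == "cylinder"]

def resize_hole(face, new_radius):
    """Direct editing: modify the surface tied to the inferred feature."""
    face["radius"] = new_radius
    return face

# An imported model is just a bag of faces with no modeling history.
model = [
    {"type": "plane", "id": "top"},
    {"type": "cylinder", "id": "bore", "radius": 5.0},
]

holes = recognize_holes(model)     # step 1: feature recognition
resize_hole(holes[0], 6.5)         # step 2: direct edit
print(model[1]["radius"])          # -> 6.5
```

Production systems do the recognition on B-rep topology (concentric cylindrical faces, blend chains, and so on), but the division of labor is the same.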
Design Just One Dimension of Experience
What’s different about Synchronous Technology is the hybrid combination of feature-based modeling and feature-inference modeling. “Synchronous” refers to the geometry solver, not the modeling process.
Sequential solvers are the oldest and simplest type. They have the limitation of order dependency. Siemens’ synchronous solver overcomes the order dependencies that have challenged history-based MCAD programs by solving explicit and inferred constraints at the same time. The synchronous solver doesn’t use a history tree, but rather holds user-defined constraints in groups associated with the surfaces to which they apply.
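The order-independence point can be made concrete with a toy solver (illustrative only — the actual synchronous solver is far more sophisticated): constraints are held as a group tied to the faces they govern and satisfied together by iteration, so the order in which the user declared them does not matter.

```python
# Toy contrast with a history-based replay: instead of re-executing
# operations in recorded order, hold the constraints in a group and
# iterate until they are all satisfied, whatever their declared order.

def solve_grouped(constraints, positions, passes=10):
    """Satisfy fix/offset constraints on face positions by repeated
    passes, independent of declaration order."""
    for _ in range(passes):
        for c in constraints:
            if c[0] == "fix":            # ("fix", face, value)
                positions[c[1]] = c[2]
            elif c[0] == "offset":       # ("offset", face, base, gap)
                if c[2] in positions:
                    positions[c[1]] = positions[c[2]] + c[3]
    return positions

# Declared "out of order": the offset references face A, whose position
# is fixed only by the later constraint. A sequential replay would fail
# or produce a stale result; the grouped solve converges regardless.
constraints = [
    ("offset", "B", "A", 5.0),
    ("fix", "A", 10.0),
]
print(solve_grouped(constraints, {}))   # -> {'A': 10.0, 'B': 15.0}
```

A real geometric constraint solver handles coupled nonlinear systems (tangency, symmetry, coaxiality) rather than scalar offsets, but the architectural contrast with ordered replay is the same.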
As far as MCAD users are concerned, Synchronous Technology means they can take an existing model (native or imported), make changes to it with no limitations from the history tree, and add new intelligence (constraints and driving dimensions), again with no limitations from the history tree. This facilitates MCAD model reuse without having to completely fathom the original modeling process.
But the design is only one dimension of the experience. A product has performance (physical, software, electronics), appearance, scent, texture, and sound. When developing a complete product you can argue that the design environment should entail a holistic view of the product. This is starting to happen, but we’re still limited by the lack of integration and — importantly — the contextual and knowledge-based automation of the tools we use.
3DLive from Dassault Systemes leverages enterprise 3D and PLM information to bring intellectual property to life in a single, immersive interface.
Mashups and More
To some extent, this automation is being addressed by “vertical industry” versions of software. There is undoubtedly value in re-packaging and re-configuring existing software assets to focus on a market or technology. A number of companies are looking to the next step; creating toolsets that embed function as well as form to deliver software to make, for instance, plastic parts; not just a simple technology integration of CAD, CAE, and plastic flow analysis software.
By focusing on exactly the functionality you need, the user experience is enriched by avoiding the all-things-to-all-men functionality that just gets in the way of accomplishing the task. And having specific product or industry functionality makes it much easier to take the holistic viewpoint and always look at the big picture, rather than having a myopic component-level approach.
Among the many interesting trends born of the Internet generation is the concept of mashups. Based on some of the same principles underlying a service-oriented architecture (SOA), a mashup is a web application that combines data from more than one source into a single integrated tool. An example is the use of cartographic data from Google Maps to add location information to real-estate data, thereby creating a new and distinct application that was not originally provided by either source.
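The mechanics of a mashup are simple at heart: join records from independent sources on a shared key to produce a dataset neither source offers alone. A minimal sketch with made-up data (a real mashup would fetch coordinates from a mapping service such as Google Maps rather than a local table):

```python
# Toy mashup: combine real-estate listings with a separate source of
# geographic coordinates, keyed on address. All data here is invented
# for illustration.

listings = [
    {"address": "12 Elm St", "price": 450000},
    {"address": "9 Oak Ave", "price": 380000},
]
geocodes = {
    "12 Elm St": (37.33, -121.89),
    "9 Oak Ave": (37.34, -121.90),
}

def mash_up(listings, geocodes):
    """Merge the two independent sources into one combined dataset."""
    return [
        {**item, "lat_lon": geocodes[item["address"]]}
        for item in listings
        if item["address"] in geocodes
    ]

for row in mash_up(listings, geocodes):
    print(row["address"], row["price"], row["lat_lon"])
```

Service-oriented architectures make the same move at the level of whole applications: each source exposes its data as a service, and the mashup is the thin layer that joins them.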
Now it’s not too much of a stretch to imagine a situation where an application could be developed as needed from various sources of functionality, from different vendors perhaps, within a custom immersive and engaging environment. You’d be able to configure the product by dragging and dropping the functionality you want, and even build your own interface, as opposed to buying a single complex application that tries to be everything to all men. And because your requirements might change from day to day, you could reconfigure your product as often as you need.
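At the software level, this composition model is a registry of interchangeable services assembled at run time. A minimal sketch of the idea (all names hypothetical):

```python
# Sketch of runtime-composable functionality: an application assembled
# from independent "services" that can be added, used, and removed as
# requirements change from day to day.

class Workbench:
    """Holds whatever functionality the user has dropped in today."""
    def __init__(self):
        self.tools = {}

    def add(self, name, func):
        self.tools[name] = func

    def remove(self, name):
        self.tools.pop(name, None)

    def run(self, name, *args):
        return self.tools[name](*args)

bench = Workbench()
bench.add("mass", lambda volume, density: volume * density)
bench.add("cost", lambda mass, rate: mass * rate)

m = bench.run("mass", 3, 8)        # a CAE-flavored service: 24
print(bench.run("cost", m, 2))     # -> 48

# Tomorrow's configuration might drop "cost" and add something else.
bench.remove("cost")
```

Under a service-oriented architecture, each entry in the registry would be a remote vendor service rather than a local function, but the reconfiguration story is identical.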
Open Environments Still in Process
This has to be one of the ways to address differing functional requirements in the future, but the challenge is how the vendors will cope with it from a commercial point of view. To make this work, vendor revenue — and possibly sales — models must change.
Who will provide the services for the various technical apps is not clear, and how you choose between different vendors’ competing functionality will be interesting. It requires much more of an open mind on open systems to be able to do anything like this.
The future software environment will be a lot less about geometry and more about the process and solving product issues such as form, feel, and function. It will become more about design collaboration where multi-party interaction will be seamless, and the complexities of the software will be hidden from the user while being replaced with environments that play to the emerging generations’ preferences.
Allan Behrens is the director of Cambashi Ltd., a UK-based firm of analysts and consultants on the use of industrial IT. He can be reached at email@example.com. Send comments about this article via e-mail to DE-Editors@deskeng.com.