The new Grid Computing Toolbox from Maplesoft (Waterloo, Ont.; maplesoft.com) is a distributed computing add-on for the Maple technical computation system that enables engineers to leverage a network of workstations to run complex computations. By taking advantage of all available hardware resources, users cut processing time and can develop applications that were not previously possible.
The Grid Computing Toolbox allows you to run Maple computations in parallel from within Maple’s rich technical document environment.
The Grid Computing Toolbox provides an interactive interface for running Maple computations in parallel. It self-assembles into a grid on a local network, offers high-level parallelization commands (map, seq, etc.), and includes automatic deadlock detection and recovery.
“The Grid Computing Toolbox allows users to distribute computations across the nodes of a network of workstations, a supercomputer, or across the CPUs of a multiprocessor machine,” said Laurent Bernardin, vice president of research and development for Maplesoft. “This allows for the handling of problems that are not tractable on a single machine because of memory limitations or because it would simply take too long.”
The new Grid Computing Toolbox is part of the Maple Professional Toolbox Series, a set of products that target key applications in engineering, science, and technical application development. It can integrate into existing job scheduling systems such as PBS, and is said to be simple to set up: users start a server process on each machine on the network, and the grid self-assembles as each node automatically detects the others.
The toolbox also includes a Personal Grid Server, which starts a test server on the local machine with a single click. Users can simulate any number of nodes on a desktop machine and use the test server to develop and debug parallel applications before deploying them to the real grid. The toolbox uses a generic parallel divide-and-conquer algorithm and supports heterogeneous networks.
To perform distributed computations, the Grid Computing Toolbox offers an MPI-like message-passing API as well as a set of high-level parallelization commands, along with access to all the computational power of Maple. To support developers, Maplesoft has also created MapleConnect, an "open marketplace" for applications based on Maplesoft products that provides a low-risk path to commercializing an engineer's or scientist's work. Once a developer decides on the product, its features, and its price, Maplesoft will provide resources to help turn the idea into a commercial-grade product and will handle sales through the Maple Web Store.
The setup for Maple’s Grid Computing Toolbox shows the many options in place to help you get started.
Maplesoft had previously released the Grid Computing Toolbox as HPC Grid, which was sold as a third-party MapleConnect product. The Grid Computing Toolbox requires Maple 11.02 and is available for Windows, including Vista; 32-bit Linux; 64-bit Linux for Opteron (or equivalent); Mac OS X 10.4 (PPC and Intel); and Sun Solaris. Pricing begins at $199 for the Maple Grid Computing Toolbox Personal Edition, which allows for up to 8 CPUs in a cluster.