
The Exotic Physics of Frustrated Magnets

Editor’s Note: This guest blog post is by Dr. Kingshuk Majumdar, associate professor of physics, Grand Valley State University, MI. Dr. Majumdar shares some of his research below, which was greatly facilitated by the use of a supercomputing cluster. If you would like to contribute to Engineering on the Edge, please contact us.

Frustrated magnetic materials exhibit a wealth of interesting magnetic properties. Unlocking the mysteries of these frustrated magnets will not only deepen our understanding of the fundamental physics of these materials, but may also provide clues for potential technological applications in the near future. Therefore, these systems are presently under intense investigation by the physics community.

Besides mass and charge, the electron, an elementary particle within an atom, also has “spin.” Spin, an intrinsic property of electrons, comes in two varieties: “spin-up” and “spin-down.” In frustrated magnets, an imbalance of these two types of spins results in magnetic frustration. Using the state-of-the-art, 504-node supercomputing cluster “MATLAB on the TeraGrid,” housed in the Center for Advanced Computing at Cornell University, I am theoretically investigating the rich and exotic physics of these complex magnetic materials.


SC11 Journal: News for the Supercomputing Conference

I have a full plate of meetings at Supercomputing 2011 this year. Below are some of the points of interest I am learning about along the way that I wanted to share.

Platform Computing
Yesterday I attended the 7:30 breakfast meeting for Platform Computing’s MapReduce. This is a policy-driven workload manager and scheduler that handles mixed types of workloads running on the same cluster. It uses an open architecture to support multiple applications for jobs built with Hadoop MapReduce technology. Applications include Pig, Hive, Java, Oozie, Cumbo, and natively written Java MapReduce programs.
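
To make the “natively written Java MapReduce programs” mentioned above concrete, here is a minimal word-count sketch against the standard Hadoop API (org.apache.hadoop.mapreduce). It is illustrative only, not a Platform Computing example, and the class names are my own.

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

public class WordCount {

    // Mapper: emits (word, 1) for every token in its input split.
    public static class TokenizerMapper
            extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer tokens = new StringTokenizer(value.toString());
            while (tokens.hasMoreTokens()) {
                word.set(tokens.nextToken());
                context.write(word, ONE);
            }
        }
    }

    // Reducer: sums the counts emitted for each word.
    public static class IntSumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable result = new IntWritable();

        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) {
                sum += v.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }
}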

The scheduler can give the high-performance computing (HPC) manager options to schedule job submissions with a Fairshare Scheduler, Preemptive Scheduler, Threshold-based Scheduler, or Task Scheduler. It also supports resource draining.
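
As a concept sketch only (this is not Platform’s implementation), fairshare scheduling can be thought of as dispatching the pending job whose owner has used the least of their allotted share so far; all class and method names below are hypothetical.

import java.util.Comparator;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Optional;

class FairshareSketch {
    record PendingJob(String user, String jobId) {}

    private final Map<String, Double> usedCpuHours = new HashMap<>();
    private final Map<String, Double> shareWeight = new HashMap<>();

    // Assign a share weight to a user (larger weight means a larger entitlement).
    void setShare(String user, double weight) {
        shareWeight.put(user, weight);
    }

    // Record CPU hours consumed by the user's completed or running work.
    void recordUsage(String user, double cpuHours) {
        usedCpuHours.merge(user, cpuHours, Double::sum);
    }

    // Normalized usage: consumed CPU hours divided by the user's share weight.
    private double normalizedUsage(String user) {
        double used = usedCpuHours.getOrDefault(user, 0.0);
        double weight = shareWeight.getOrDefault(user, 1.0);
        return used / weight;
    }

    // Pick the pending job whose owner is furthest below their fair share.
    Optional<PendingJob> pickNext(List<PendingJob> pending) {
        return pending.stream()
                .min(Comparator.comparingDouble((PendingJob j) -> normalizedUsage(j.user())));
    }
}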

MapReduce sends the application to a Job Controller, which decides what data should be mapped to an input folder and which map tasks should go to local storage. It can split data so that only the data needed for the compute schedule goes to the CPU. It also uses Resource Groups to move data between local workstations.
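
For comparison, here is what the driver side looks like with the stock Hadoop Job API (again, not Platform-specific): the input folder whose contents are split into map tasks and the output folder for the reduced results are declared here, with the paths taken from the command line as placeholders. It reuses the WordCount classes sketched earlier.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCountDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCountDriver.class);

        job.setMapperClass(WordCount.TokenizerMapper.class);
        job.setCombinerClass(WordCount.IntSumReducer.class);
        job.setReducerClass(WordCount.IntSumReducer.class);

        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        // Input folder to be split into map tasks, and output folder for results.
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}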

Platform Computing’s MapReduce allows clusters to be grouped into one large resource, with Platform acting as a massive scheduler. The user has control of the job not just while it is in the queue, but also while it is running.

 

