A week ago, Diane Bryant, senior vice president and general manager of Intel's Data Center Group, introduced the new processor lineup, the Xeon E5-2600/1600 v3 series, at an event at the Terra Gallery in San Francisco, California. The processors, Intel says in its press release, are “central to enabling a software defined infrastructure,” which Intel calls “the foundation for cloud computing.”
Intel writes, “With up to 18 cores per socket and 45MB of last-level cache, the Intel Xeon E5-2600 v3 product family provides up to 50% more cores and cache compared to the previous generation processors.”
In a presentation showcasing possible applications, Intel offers statistics showing that the E5-2600/1600 v3 processors deliver significantly better performance than their predecessors. LS-DYNA simulation software, for instance, runs up to 50% faster on the E5-2697 v3 than on the E5-2697 v2. MSC Nastran runs up to 46% faster, and ANSYS Mechanical up to 38% faster, on v3 than on v2.
A couple of hours after noon on Tuesday, June 4, in Asia, or an hour before midnight Monday in the Pacific time zone, Intel is debuting its fourth-generation Core architecture, codenamed Haswell. The big splash is set to occur at Computex in Taipei, Taiwan, at the Taipei World Trade Center Nangang Exhibition Hall. But many critical details about Haswell, its power efficiency and mobile friendliness in particular, were made public well in advance by Intel executives themselves. Here are a few revelations gleaned from conference previews over the last two years:
Ever seen three thoroughbreds heading for the same finish line but running on different tracks? Watch AMD, Intel, and NVIDIA going after the high-performance computing (HPC) market. This week, Intel entered an official name in the race: Intel Xeon Phi. The first product to feature Intel’s many integrated core (MIC) architecture, Phi is expected to ship with more than 50 cores.
Skilled programmers who can sneak into the GPU and execute their parallel jobs belong to an elite group. They are “Ninja programmers,” as AMD corporate fellow Phil Rogers calls them.
Rogers, who delivered the keynote at this week’s AMD Fusion Developer Summit, believes GPU computing should be available to a broader audience: the everyday programmers who make a living churning out code in C, C++, Java, and Python. In fact, Rogers may even object to the term GPU computing. If it were up to AMD and Rogers, GPU and CPU computing would be one and the same, fused together into a Heterogeneous System Architecture (HSA).
On March 6, as commuters on the West Coast begrudgingly joined the morning rush-hour traffic, Intel unveiled what could be the solution to heavy traffic in the cloud.
At 9 AM Pacific, Diane Bryant, the newly appointed vice president and general manager of Intel’s Datacenter and Connected Systems Group, took the stage at the Contemporary Jewish Museum (San Francisco, California) to launch the Intel Xeon E5-2600 family, described by Intel as “the heart of a flexible, efficient data center.”