Tech topic: historic change in computing performance growth

Thu, Oct 27, 2011 - 11:00pm
Thieving Corp.

The target should be 1000s of cores per chip

  • The target should be 1000s of cores per chip, as these chips are built from processing elements that are the most efficient in MIPS (Million Instructions per Second) per watt, MIPS per area of silicon, and MIPS per development dollar.

The Landscape of Parallel Computing Research: A View from Berkeley

https://www.eecs.berkeley.edu/Pubs/TechRpts/2006/EECS-2006-183.html

Sat, Oct 29, 2011 - 10:48am
Thieving Corp.

Some quotes from Prof. Patterson

David Patterson at Berkeley https://www.amazon.com/David-A.-Patterson/e/B000APBUAE was one of the best at describing the problem even before the NRC report cited at the top of this thread was published. (Note that I'm not saying anything about having a solution.) In addition to the Berkeley tech report linked just above, here is a slideshow from Patterson and some quotes pasted from it.

https://www.usenix.org/event/usenix08/tech/slides/patterson.pdf

The Parallel Revolution Has Started:
Are You Part of the Solution or Part of the Problem?

Dave Patterson
Parallel Computing Laboratory
U.C. Berkeley
June, 2008

A Parallel Revolution, Ready or Not

- PC, Server: Power Wall + Memory Wall = Brick Wall
  ⇒ End of the way we built microprocessors for the last 40 years
  ⇒ New Moore's Law is 2X processors ("cores") per chip every technology generation, but ≈ the same clock rate
- "This shift toward increasing parallelism is not a triumphant stride forward based on breakthroughs ...; instead, this ... is actually a retreat from even greater challenges that thwart efficient silicon implementation of traditional solutions." (The Parallel Computing Landscape: A Berkeley View, Dec 2006)
- Sea change for HW & SW industries, since it changes the model of programming and debugging

You can't prevent the start of the revolution

- While evolution and global warming are "controversial" in scientific circles, belief in the need to switch to parallel computing is unanimous in the hardware community
- AMD, Intel, IBM, Sun, ... now sell more multiprocessor ("multicore") chips than uniprocessor chips
- Plan on little improvement in clock rate (8% / year?)
- Expect 2X cores every 2 years, ready or not
- Note: they are already designing the chips that will appear over the next 5 years, and they're parallel

But Parallel Revolution May Fail

- 100% failure rate of parallel computer companies: Convex, Encore, Inmos (Transputer), MasPar, NCUBE, Kendall Square Research, Sequent, (Silicon Graphics), Thinking Machines, ...
- What if IT goes from a growth industry to a replacement industry?
- If SW can't effectively use 32, 64, ... cores per chip ⇒ SW no faster on new computer ⇒ Only buy if computer wears out ⇒ Fewer jobs in IT industry (see the Amdahl's law sketch after these slides)

Why might we succeed this time?

- No Killer Microprocessor: no one is building a faster serial microprocessor, so programmers needing more performance have no option other than parallel hardware
- Vitality of Open Source Software: the OSS community is a meritocracy, so it's more likely to embrace technical advances, and OSS is more significant commercially than in the past
- All the Wood Behind One Arrow: the whole industry is committed, so more people are working on it
- Single-Chip Multiprocessors Enable Innovation: they enable inventions that were impractical or uneconomical, and FPGA prototypes shorten the HW/SW cycle (fast enough to run the whole SW stack, can change every day vs. every 5 years)
- Necessity Bolsters Courage: since we must find a solution, industry is more likely to take risks in trying potential solutions
- Multicore Synergy with Software as a Service
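
The "SW no faster on new computer" chain above is just Amdahl's law at work. Here is a minimal sketch of it in Python; the parallel fractions and core counts are my own illustrative numbers, not Patterson's:

```python
# Amdahl's law: a minimal sketch of why adding cores stops helping
# unless the software is overwhelmingly parallel. All numbers below
# are illustrative assumptions, not from Patterson's slides.

def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Overall speedup when only `parallel_fraction` of the work
    can be spread across `cores`; the rest stays serial."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

for p in (0.50, 0.90, 0.99):
    for n in (2, 32, 64):
        print(f"parallel={p:.0%} cores={n:3d} -> {amdahl_speedup(p, n):5.2f}x")

# Even with 90% of the program parallelized, 64 cores yield only ~8.8x.
# That is the slides' point: if SW can't use the cores, buyers see
# little reason to upgrade.
```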

Sat, Oct 29, 2011 - 11:24am
Thieving Corp.

Prof. Patterson presents his slides

The Parallel Revolution Has Started: Are You Part of the Solution or Part of the Problem?
Sat, Oct 29, 2011 - 1:40pm
Thieving Corp.

An earlier lecture by Patterson

Computer Architecture is Back: Parallel Computing Landscape
Sat, Oct 29, 2011 - 4:40pm
Thieving Corp.

iPhone 4S CPU can do 1GHz but they run it at 800 MHz

Apple’s upcoming iPhone 4S will be a reminder that CPU clock speed isn’t everything.

Even though the iPhone 4S runs the same dual-core A5 CPU as the iPad 2, Apple has apparently reduced the chip’s speed from 1 gigahertz to around 800MHz, reports Anandtech. But that doesn’t stop the iPhone 4S from blazing past all other phones — many of which have CPUs well over 1GHz — in benchmark tests.

https://venturebeat.com/2011/10/11/iphone-4s-cpu-slower-than-ipad-2-still-faster-than-all-android-phones/

Sat, Oct 29, 2011 - 8:11pm
Thieving Corp.

The heterogeneous cores approach

I was having a pint with a coworker the other day, and he was showing off his new Android phone: "Look at all the system info they show you." You can see, for instance, what percentage of the battery has been consumed by different activities. I didn't look too closely, but I did notice one particular entry: "System Idle: 48%". Obviously the phone is not completely idle during the times you are not using it. This is the next thing to try: use a slower core for always-on background tasks, and use the faster core for snappy user response and heavier tasks. These articles and the previous one are recent examples of how the power wall is acting as a speed limit for processors.

https://www.eetimes.com/electronics-news/4229926/ARM-unleashes-Little-Dog-on-Intel-s-tail

ARM unleashes 'little dog' on Intel's tail

Sylvie Barak

10/20/2011 6:27 PM EDT

While ARM is certainly making a rather big deal over its "big-little" A7/A15 core combo, its partners may not be completely sold on the idea.

While the notion of combining a powerful core with a much smaller, low-power one will offer a certain amount of flexibility, it’s neither a particularly innovative approach, nor is it necessarily enough to threaten rival Intel as much as ARM probably hopes.

ARM licensee Nvidia has actually been using a similar approach for a while now, using a mixture of process technologies on a single chip. ARM's move certainly gives validation to Nvidia's methodology (things like unveiling a 'secret' fifth core in its Kal-El processor), but it's yet to be seen whether the approach will pan out in the long run.

https://www.anandtech.com/show/4991/arms-cortex-a7-bringing-cheaper-dualcore-more-power-efficient-highend-devices

ARM's Cortex A7: Bringing Cheaper Dual-Core & More Power Efficient High-End Devices, by Anand Lal Shimpi, 10/19/2011

How do you keep increasing performance in a power constrained environment like a smartphone without decreasing battery life? You can design more efficient microarchitectures, but at some point you’ll run out of steam there. You can transition to newer, more power efficient process technologies but even then progress is very difficult to come by. In the past you could rely on either one of these options to deliver lower power consumption, but these days you have to rely on both - and even then it’s potentially not enough. Heterogeneous multiprocessing is another option available - put a bunch of high performance cores alongside some low performance but low power cores and switch between them as necessary.

NVIDIA recently revealed it was doing something similar to this with its upcoming Tegra 3 (Kal-El) SoC. NVIDIA outfitted its next-generation SoC with five CPU cores, although only a maximum of four are visible to the OS. If you’re running light tasks (background checking for email, SMS/MMS, twitter updates while your phone is locked) then a single low power Cortex A9 core services those needs while the higher performance A9s remain power gated. Request more of the OS (e.g. unlock your phone and load a webpage) and the low power A9 goes to sleep and the 4 high performance cores wake up.
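
To make the switching policy these articles describe concrete, here is a toy sketch in Python. The core names, load thresholds, and screen-state heuristic are all invented for illustration; real SoCs like Kal-El implement this in hardware and firmware, not in application code:

```python
# Toy sketch of the heterogeneous "companion core" policy described
# above: park light background work on a low-power core and wake the
# fast cores only under load. Names and thresholds are assumptions.

LOW_POWER_CORE = "A9-low-power"          # Kal-El style companion core
FAST_CORES = [f"A9-fast-{i}" for i in range(4)]

def assign_cores(load: float, screen_on: bool) -> list[str]:
    """Return which cores should be awake for the current demand."""
    if not screen_on and load < 0.2:     # idle polling: email, SMS, twitter
        return [LOW_POWER_CORE]
    if load < 0.5:                       # light interactive use
        return FAST_CORES[:1]
    return FAST_CORES                    # page loads, games: all fast cores

print(assign_cores(load=0.05, screen_on=False))  # ['A9-low-power']
print(assign_cores(load=0.80, screen_on=True))   # all four fast cores
```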

Mon, Oct 31, 2011 - 9:34pm
Thieving Corp.

The "big data" justification for a different computing future

This article simply mentions that getting answers quickly using huge amounts of data will necessitate architectural changes. The last two paragraphs refer to the computer architecture of today and what the future may be like.

Big Data, Speed and the Future of Computing

...

The Von Neumann model, named after the mathematician John von Neumann, has been the template for computing for six decades. It has a processor at the center, surrounded by memory and storage, and it is the basis of computing today. But the "Von Neumann bottleneck" describes the slowdown in performance that comes from that approach, with the processor tightly managing data input and output.

“We’re going to have to move from processor-centric computing to data and memory-centric computing with processors sprinkled in it,” Mr. Kelly said.

https://bits.blogs.nytimes.com/2011/10/31/big-data-speed-and-the-future-of-computing/
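
As a rough sketch of what "data and memory-centric computing with processors sprinkled in" could look like, compare hauling all the data to one central processor with pushing a small computation out to each block of data and moving only the partial results. The partitions and sizes below are invented for illustration:

```python
# Toy contrast for the quote above: processor-centric computing ships
# every byte across the bottleneck to one CPU, while data-centric
# computing runs a small function where each block of data lives and
# only moves the tiny partial results. The "nodes" are plain lists.

partitions = [list(range(i, i + 1000)) for i in range(0, 5000, 1000)]

def central_sum(parts):
    """Processor-centric: move all the data to one place, then compute."""
    everything = [x for p in parts for x in p]   # full data movement
    return sum(everything)

def local_then_combine(parts):
    """Data-centric: compute where the data sits, move only results."""
    partials = [sum(p) for p in parts]           # "processors sprinkled in"
    return sum(partials)                         # tiny combine step

assert central_sum(partitions) == local_then_combine(partitions)
```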

Wed, Nov 2, 2011 - 1:55am
UGrev

I absolutely agree with this

I absolutely agree with this statement:

“We’re going to have to move from processor-centric computing to data and memory-centric computing with processors sprinkled in it"

I've been moving toward using object databases instead of relational databases. Why? Because my data is wholly represented by any object instance. But there are performance issues with large sets of data (hierarchical or otherwise):

1. Disk I/O is a bottleneck for read operations on data in this format. Not that it's any better in a database "server", but object DBs seem to suffer a lot here.

2. Memory consumption is high in ODBs.

These aren't really CPU-heavy types of things; these are issues related to I/O and memory R/W. I need MUCH faster I/O, and quite honestly I currently need to cache data in memory, so I need lots of it. I think part of the answer is moving away from HDDs and toward solid state drives. From there, something has to happen to handle data busing better.
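
A minimal sketch of the in-memory caching workaround described here, assuming a hypothetical load_object_from_odb call as a stand-in for whatever read API the object database actually exposes:

```python
# Minimal sketch: keep hot objects in an in-memory cache so the object
# store's slow disk reads are paid only once per object.
# `load_object_from_odb` is a hypothetical stand-in, not a real API.
from functools import lru_cache

def load_object_from_odb(object_id: str) -> dict:
    # Hypothetical: pretend this does an expensive disk read.
    return {"id": object_id, "payload": "..."}

@lru_cache(maxsize=100_000)              # bounded, so RAM use stays sane
def get_object(object_id: str) -> dict:
    return load_object_from_odb(object_id)

obj = get_object("order-42")             # first call hits the disk
obj = get_object("order-42")             # repeat calls come from memory
print(get_object.cache_info())           # hits=1, misses=1, ...
```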

Wed, Nov 2, 2011 - 7:59am
Thieving Corp.

UGrev wrote: I absolutely

UGrev wrote:

I absolutely agree with this statement:

“We’re going to have to move from processor-centric computing to data and memory-centric computing with processors sprinkled in it"

I've been moving toward using object databases instead of relational databases. Why? Because my data is wholly represented by any object instance. But there are performance issues with large sets of data (hierarchical or otherwise):

1. Disk I/O is a bottleneck for read operations on data in this format. Not that it's any better in a database "server", but object DBs seem to suffer a lot here.

2. Memory consumption is high in ODBs.

These aren't really CPU-heavy types of things; these are issues related to I/O and memory R/W. I need MUCH faster I/O, and quite honestly I currently need to cache data in memory, so I need lots of it. I think part of the answer is moving away from HDDs and toward solid state drives. From there, something has to happen to handle data busing better.

Within the current architectural orthodoxy, SSDs are the "next thing to try". For example, see this:

https://gigaom.com/cloud/solidfire-gets-25-million-to-fuel-flash-fueled-cloud-storage/

https://venturebeat.com/2011/10/31/solidfire-raises-25m-to-boost-cloud-provider-agility-and-performance/

However, given that we have reached the limits of the traditional CPU architecture (as described earlier in this thread), even if you speed up the I/O you will still be limited by single-thread performance (which will not significantly improve) and by energy costs.

Your final sentence, which I bolded, implies that an architectural change is needed. But today's computer companies are not addressing this need; they are mostly focused on maintaining their existing revenues by keeping up the illusion that their products are increasing in performance over time. This is why I predict the breakthroughs will likely come from outside the established companies. I don't see any of them publicly questioning the Von Neumann, sequential stored-program, clocked, processor-vs.-memory architecture with an innovative solution. The crisis/opportunity is still on.

Fri, Nov 4, 2011 - 9:19pm
Thieving Corp.

AMD Cuts Workforce So It Can ‘Cut Chip Power’

Chipmaker AMD announced some big layoffs on Thursday, saying it needs to cut costs in order to push into new markets and stay ahead of rivals.

The company is laying off about 1,400 people, a bit more than 11 percent of its workforce of 12,000. The cuts will come from all parts of the company, says Drew Prairie, a company spokesman. “It’s from vice president all the way down to individual contributors.”

AMD hired a new CEO, Rory Read, in August, and he’s trying to push the company into new markets. Building chips that don’t gobble up a lot of power is the current fixation in the chip industry. This helps out in both mobile devices such as the iPhone and in the servers that power massive data centers.
https://www.wired.com/wiredenterprise/2011/11/amd-workforc/
Fri, Nov 4, 2011 - 10:52pm
Thieving Corp.

New AT&T cooling policy heats up debate


Rick Merritt

11/2/2011 5:50 PM EDT

SAN JOSE, Calif. – AT&T will stop buying systems next year that don't conform to new policies about air cooling and it is starting a move to liquid cooling. The decisions mark a new milestone in how rising use of mobile data and video is creating power and heat issues in data centers and central offices.

"In 2012 AT&T and Verizon will no longer accept equipment with side-to-side airflow," said Bon Pipkin, a network engineering manager at AT&T, speaking in a keynote at the Advanced TCA Summit here.

...

The move to use some form of refrigeration comes as some big data center operators are moving away from air conditioning and adopting ambient air. Facebook, for example, announced last week it is opening its first European data center in Sweden, in part to use ambient air and save power and cost by eliminating air conditioning.

AT&T will publish a document next year describing its new thinking about cooling the more than 7,000 central office facilities it currently manages. "We will ask vendors to price an optional DRC, and we have talked to half a dozen vendors about this already," Pipkin said.

The document will state a requirement that vendors who use side-to-side air flow provide mechanical workarounds such as ducts and snorkels to conform to AT&T's front-to-rear cooling approach.

The speech marked the first time a carrier said it would not purchase equipment unless it conforms to a specific cooling approach, said one veteran of the ten-year-old ATCA standard who asked not to be named.

https://www.eetimes.com/electronics-news/4230303/New-AT-T-cooling-policy-heats-up-debate

Sat, Nov 5, 2011 - 11:09am
Thieving Corp.

How Your iPhone Chip Will Reinvent the Internet Data Center


Jonathan Heiliger is the kind of guy you want running your data center. He’s been stringing together servers since the late 1990s, when he co-founded the high-end data center company Frontier GlobalCenter. Until just a few months ago, he was vice president of technical operations at Facebook, building out the social network’s infrastructure as it skyrocketed from 50 million users to 750 million.

In June 2009, Heiliger shared his thoughts on the latest server chips from Intel and AMD at a conference in San Francisco. And what he had to say wasn’t nice. Hardware vendors had failed companies like Facebook, he said. And the performance gains that their latest generations of processors were supposed to deliver? Facebook simply wasn’t seeing them.

He was annoyed with the server-makers, too. “You guys don’t get it,” he said. “To build servers for companies like Facebook and Amazon and other people who are operating fairly homogeneous applications, the servers have to be cheap and they have to be super power-efficient, and that doesn’t just mean putting in a really highly efficient power supply, but it means going all the way down basically from starting at the wall outlet all the way to the processor and figuring out how to optimize that power path.”

Heiliger did concede that one company was good at building computers: “Google has done a tremendous job of building and designing their own servers,” he said. Not exactly a ringing endorsement for the status quo in the server industry.

Today, Intel dominates traditional enterprise server rooms, and businesses generally buy their servers from the likes of Dell and HP. But after Heiliger’s harsh words for the traditional players and similar noises from others in the know, some think that the big name server and server chip manufacturers will soon come under threat from upstarts that build a very different type of system — one that’s specifically designed for the “cloud” services such as Facebook and Google and Amazon.

One threat may come from server chips based on the ARM architecture — processors much like those used in the iPhone, the iPad, and so many other mobile devices. The key to ARM’s success on smartphones is power — or, more accurately, lack of it. ARM designs low-power chips that work well enough without burning up a lot of juice. As Jonathan Heiliger made so clear, that’s just as important in the Internet data center.

https://www.wired.com/wiredenterprise/2011/11/low-energy-servers/

Calxeda’s ultra low-power EnergyCore server chip takes cues from smartphones

November 1, 2011 | Devindra Hardawar

Ushering in the era of low-power servers, Austin, Texas-based Calxeda is today announcing its EnergyCore ARM-based processor, the first ever chip capable of running an entire server at a mere 5 watts.

The EnergyCore server-on-a-chip uses 90 percent less power (just 1.5 watts while idle), takes up 90 percent less space, and is half as expensive as traditional server solutions, according to the company. Since it’s based on ARM technology, Calxeda’s chips are taking a cue from the low-power, yet highly capable, ARM processors used in smartphones and tablets.

"ARM is to processors what Linux is to operating systems," Calxeda VP of marketing Karl Freund told VentureBeat in an interview yesterday, referring to the way companies can build innovative technologies on top of ARM's original designs. The EnergyCore chips are based on ARM's quad-core Cortex A9, and they run at speeds between 1.1GHz and 1.4GHz. But the company also added an 80-gigabit fabric switch, which will allow for high data throughput, as well as an energy management engine.

The complete EnergyCore server node also includes 4 gigabytes of RAM. Calxeda’s chips are 32-bit, but the company says it will be ready to jump to 64-bit chips once ARM’s designs are complete.

Calxeda says its chips are best suited for target applications like storage and file serving, or web apps. You won’t see much of an advantage using EnergyCore servers for heavy duty video encoding, but for most other server uses it’s an ideal balance between low-energy usage and a decent amount of computing power.

https://venturebeat.com/2011/11/01/calxeda-low-power-energycore-server/

ARM unveils 64-bit architecture

Sylvie Barak

10/27/2011 3:17 PM EDT

SANTA CLARA, Calif. - ARM Holdings has announced that its next-gen ARMv8 architecture will include the firm's first 64-bit instruction set, pushing ARM-based processors into new segments of the consumer and enterprise markets.

Speaking at ARM TechCon 2011 in Santa Clara, Calif., ARM Chief Technology Officer Mike Muller said the new v8 architecture would consist of two main execution states: AArch64 and AArch32, with the former introducing a new A64 instruction set for 64-bit processing, while the latter would continue to support ARM's existing instruction set.
...
Taking a dig at rival Intel Corp., Muller said that though the world had done very well by Moore's Law, "there's nowhere else to go." Muller said that while more cores were certainly a good trend, ARM believed the future would be found in having "lots of power efficient cores rather than some more power inefficient cores." Muller maintained that overall, the problems came down to the system level, not the cores.
...
In his keynote address, Muller also spoke about the increased need for more heterogeneous computing, system partitioning, and solving the problems of energy efficiency in devices as systems became increasingly complex and memory intensive.

"Today, everything is an energy constrained system," he said, adding, "the solutions are all about building heterogeneous solutions."

https://www.eetimes.com/electronics-news/4230160/ARM-unveils-64-bit-architecture
Fri, Nov 11, 2011 - 9:40pm
Thieving Corp.

Computer Performance - Peak CPU

Summary of the Computer Performance Situation - Peak CPU

I want the community to be well informed about what is going on in the computer field so that no one is misled by the hype surrounding so-called "exponential" trends in technology. These expert sources document the end of the computer performance growth to which we have all been accustomed. Just because it's been good for a while does not mean it will last forever.

Remember when they used to brag about more and more MHz and GHz? Funny, you don't see or hear that anymore. Now it's all about power consumption, in the data center and in mobile devices.

CPUs cannot run faster without increased costs and/or hitting the limits of power and memory access. The reality of the Power Wall, the Memory Wall, and the Instruction-Level Parallelism Wall means we have hit the speed barrier of single-thread performance. Peak CPU is the new reality today.

Multiple cores and multiple threads (as they exist today) are not satisfactory solutions. The Power Wall still limits the number of cores and the speed of each individual one. How long can they keep doubling the number of cores? The Memory Wall is still there, and it only gets worse as more threads execute. It's tricky to get any performance advantage in general by using threads; it requires careful tuning. Threads are OK for special situations, like reading from a "firehose" input such as a price feed (see the sketch below); they are not worth the trouble for general-purpose programming.
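
For the record, a minimal sketch of that one concession: a "firehose" feed serviced by a reader thread and a consumer thread. The feed is simulated here, and nothing about this speeds up CPU-bound work:

```python
# Minimal sketch of the one case where threads clearly pay off: one
# thread blocks on a "firehose" input (e.g. a price feed) while another
# consumes. The feed is simulated; this does not help CPU-bound code.
import queue
import threading

feed = queue.Queue(maxsize=1024)

def reader():
    for tick in range(5):                # stand-in for a blocking socket read
        feed.put(("XAU", 1750.0 + tick))
    feed.put(None)                       # sentinel: feed closed

def consumer():
    while (msg := feed.get()) is not None:
        symbol, price = msg
        print(f"{symbol} @ {price}")     # bookkeeping while reader waits on I/O

t1 = threading.Thread(target=reader)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
```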

The hardware industry seems at a loss, as it cannot see or reason beyond clocked circuits and a sequential thread of instructions. Software people have a difficult time programming for nonexistent machines. It's a self-reinforcing cycle.

"It would not be prudent" to expect much faster computers (of today's architecture), or to invest in anything dependent on the broken trend. Any speedups within the current computational model are only going to be more expensive: power costs, clouds, etc. The cloud (adding more networked boxes) is the only way to make things go faster today, so obviously it's one of the hottest trends.

Wed, Nov 16, 2011 - 7:41am
Thieving Corp.

Keiser Report 11/15 CPU Hedonic Adjustments

Thu, Nov 17, 2011 - 10:18am
bbacq

@thief: Thanks!

Wow, nice job, thief. Really resonates with me. Xty said I had to get over here and see your stuff. Haven't caught up with the whole thread yet (it will have to be a bacqground tasq).

I think I still have my Transputer literature from the mid-80s, and I designed and built a snooping cache-coherent multiprocessor supercomputer in the late 80s for a little R&D company in Toronto. Made the last page of the Dhrystone benchmarks, along with Cray, Amdahl et al. (and that was with just one processor :-). Based on the AMD 29000, rest its soul (apologies to Tracy Kidder). The project was killed before being commercialized, partly for the reasons Patterson states in his lecture. I have always been too far ahead in my thinking (and it shows in my trading performance :-()

But, and I hope it doesn't sound *too* arrogant, I think "many-core" may just be an interim, incremental approach. It is one model, but I am afraid we will fairly shortly (decades, like last time?) hit the limits there as well, as it is not really that different from what we have done to date. Transactional memory as the next big advance? Come on. Look out at the world. See any transactional memory out there? Nope. In Nature, memory is distributed in the relations between elements of a complex system, and computation is the interaction of those elements. Many-core is still "in the box". I wonder where Carver Mead is these days....

I think we have just about enough compact compute power already to saturate human I/O. The hard problems we now have are modeling the weird and non-linear processes that occur in Nature, and we already have an architecture that works well for that: the neural network. I am frankly blown away that self-organized learning approaches weren't even mentioned by Patterson in the talk I just watched (thanks again).

For real advances in processing-power-per-<watt, transistor, etc.>, I think we will have to abandon "programming" and resort to "teaching" our computers (see the toy sketch below). Patterson seems to focus only on the algorithmic approach to solving problems and completely ignores the inherently parallel approach of self-organized neural nets. A big gap on his part, IMHO.
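
As a toy illustration of "teaching" rather than "programming": the sketch below lets a single perceptron learn the AND function from examples instead of having the rule coded by hand. The learning rate and epoch count are arbitrary choices:

```python
# Toy illustration of "teaching" a computer: nobody writes the AND rule
# below; a single perceptron learns it from examples. The learning
# rate and number of epochs are arbitrary assumptions.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]
bias = 0.0
rate = 0.1

for _ in range(20):                       # a few passes over the "lessons"
    for (x1, x2), target in examples:
        out = 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0
        err = target - out
        w[0] += rate * err * x1           # nudge weights toward the answer
        w[1] += rate * err * x2
        bias += rate * err

for (x1, x2), _ in examples:
    print((x1, x2), "->", 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0)
```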

Nonetheless, fascinating, thanks. The "brick wall" is very real, and it should make us all pause to think about whether this great bull run hasn't run out of not just the debt that fueled it, but also engines of growth like "free" compute power.

Have you read "The Future of the Internet - And How to Stop It" by Jonathan Zittrain? I don't like a future where I have no compute power of my own, it is all in the cloud, and all I get is a little internet appliance to reach my data and processors. Very, very scary. The golden goose is getting hobbled...

best regards, and I hope to comment more later, again thanks.

bbacq

Fri, Nov 18, 2011 - 8:36am
Thieving Corp.

@bbacq; You're welcome

And thanks for the response; I was starting to feel a little lonely in here. I look forward to more discussion of this subject, which I've been following for a few years now. It was a rather obscure subject for a while, but the problems are now so obvious that it has started to leak into the mainstream. The committee report has the clearest explanation of the issues so far, and the committee's membership really drives the point home and cuts through the individual companies' hype: for the Von Neumann architecture there will be no more speedups like there used to be without significantly higher costs, and none of today's players has a credible solution.

Adding to the economic implications outlined in the report, the point Max Keiser made with Mike Maloney about hedonic adjustments (arguably proper until 2004) is that this change is causing inflation to be understated! Maybe someone should point Max to this thread so he can start learning about the "new normal" of computer performance growth.

More later.

Sun, Nov 27, 2011 - 6:13pm
Thieving Corp.

Data Furnaces Could Bring Heat to Homes - NYTimes.com

Turn On the Server. It’s Cold Inside.
By RANDALL STROSS
Published: November 26, 2011

To satisfy our ever-growing need for computing power, many technology companies have moved their work to data centers with tens of thousands of power-gobbling servers. Concentrated in one place, the servers produce enormous heat. The additional power needed for cooling them — up to half of the power used to run them — is the steep environmental price we have paid to move data to the so-called cloud.

Researchers, however, have come up with an intriguing option for that wasted heat: putting it to good use in people’s homes.

Two researchers at the University of Virginia and four at Microsoft Research explored this possibility in a paper presented this year at the Usenix Workshop on Hot Topics in Cloud Computing. The paper looks at how the servers — though still operated by their companies — could be placed inside homes and used as a source of heat. The authors call the concept the “data furnace.”

They acknowledge that it is more likely that data furnaces, if adopted, would be placed first in basements of office and apartment buildings, not in individual homes. But as a “thought-provoking exercise,” the authors give homes the bulk of their attention.

https://www.nytimes.com/2011/11/27/business/data-furnaces-could-bring-heat-to-homes.html

Sat, Dec 10, 2011 - 12:18am
Thieving Corp.

GPUs? Not enough to overcome the power wall

https://www.eetimes.com/electronics-news/4230308/GPUs--low-power-pave-path-to-exascale-supercomputing-

GPUs, low-power pave path to exascale supercomputing

Sylvie Barak

11/3/2011 12:58 AM EDT

SAN FRANCISCO--The supercomputing race is accelerating, but the path to exascale computing faces serious challenges in terms of power efficiency, cost and data security.

...

Indeed, with supercomputers and HPC becoming ever more mainstream and relevant across a plethora of industries from nuclear physics to climate modeling to banking, the questions of efficiency, density and cost are becoming particularly pertinent, the panel agreed.

“Most supercomputer buyers are being constrained more by budget than by anything else,” said Claunch, noting that despite the great strides in computer modeling and the great elasticity in HPC, the drive for faster, smaller, cheaper systems remains strong.

Moore’s Law has paid off in terms of transistor density improvements, the panel agreed, but more work needs to be done in terms of performance per watt, especially as more power translates into more FLOPS of compute speed.

Power budgets for supercomputers are rising, but ultimately a machine needs to be affordable enough to run in order to be effective.

...

Power, floor space constraints


Despite the money being invested in HPC, however, many potential supercomputer clients are still being constrained by power restrictions or even floor space limitations, said Claunch, adding that he believed tremendous benefits could be gained through increased efficiency.

“We are really taking the power efficiency challenge very seriously,” said Supermicro's Clegg. “Power supplies are being looked at more carefully,” he added, noting that Supermicro aims to get the majority of its power supplies to a platinum level of efficiency, or some 94 percent plus. “Power and cooling is the biggest problem,” Clegg reiterated, noting that it was becoming increasingly difficult to achieve a favorable cost-to-benefit ratio, with cooling costs increasing almost exponentially as performance increased.

"There’s not enough cheap power to get us past the exascale level unless we make some serious architectural changes,” he said.

“Power is the main challenge on the path to exascale computing,” agreed Anthony Kenisky of Appro.

Chuck Moore, AMD corporate Fellow and technology group CTO, said those looking to achieve exascale would have to factor in a million dollars per megawatt. “As good as Bulldozer, or Interlagos is, they are not good enough; they’re not going to get us there," he added.

Moore predicted it may take AMD until at least 2019 or 2020 before its chips would offer a level of programmability sufficient to take customers to exascale level, noting that GPUs would factor heavily into the equation.

Indeed, the majority of panelists agreed that the use of GPUs in supercomputers is becoming an integral part of the segment’s forward momentum, though the consensus was that CPUs would in no way become redundant in the space as a result.

“GPUs are a very important part of heterogeneous computing in terms of alleviating the bottlenecks,” said Clegg adding that graphics processing was “right on the cusp” of becoming accepted as the predominant way to get heterogeneous computing going. While that was a significant achievement for the GPU space, however, Clegg was quick to temper his comments with caution. “Will the space become 100 percent heterogeneous and GPU based? I don’t think so, because there are some applications that are suited to it and some that are not,” he said.

...

The question, Williams said, is whether the industry can make it easier for application programmers to access the GPUs and harness all of that power effectively.

...

Sat, Dec 10, 2011 - 12:42am
Thieving Corp.

The turdish solution ;-) and uh-oh

HP Dreams of Internet Powered by Phone Chips (And Cow Chips)

Internet companies are learning that small towns are often great places to set up data centers. They have lots of land, cheap energy, low-cost labor, and something that may be a secret weapon in the race toward internet nirvana: cow dung.

For Hewlett Packard Fellow Chandrakant Patel, there’s a “symbiotic relationship between IT and manure.” Seriously. He’s thought about this a lot since working on a paper he published on the subject last year. He’s been inundated with ideas from farmers and dairy associations, and recently he visited a 1,200-cow dairy farm in Manteca, California, that’s producing half a megawatt of energy by burning methane created by manure.

Patel is an original thinker. He’s part of a group at HP Labs that has made energy an obsession. Four months ago, Patel buttonholed former Federal Reserve Chairman Alan Greenspan at the Aspen Ideas Festival to sell him on the idea that the joule should be the world’s global currency. Greenspan listened. That kind of obsession could change the face of the data center.

...

The dung-fired data center is closer than even Patel first thought. There are lots of places where they’d work, he says. You don’t need to build any special generators or equipment, and cows are everywhere. “We found many sites where it is totally doable today,” he says, “You can go anywhere from South Dakota to Wisconsin to Virginia. Even between Chicago and Indiana there are lots of dairy farms.”

...

Data centers produce a lot of heat, but to energy connoisseurs it’s not really high quality heat. It can’t boil water or power a turbine. But one thing it can do is warm up poop. And that’s how you produce methane gas. And that’s what powers Patel’s data center. See? A symbiotic relationship.

...

To the Moon, Data Center

Now, theories developed by HP Labs researchers like Patel are being pushed further with Project Moonshot. HP wants to radically cut the cost and size of data centers by cramming thousands of cool-running chips into a server rack. HP thinks its Project Moonshot servers will use 94 percent less space than the servers you typically see in data centers and they’ll burn 89 percent less energy.

Financial house Cantor Fitzgerald is interested in Project Moonshot because it thinks HP’s servers may have just what it takes to help the company’s traders understand long-term market trends. Director of High-Frequency Trading Niall Dalton says that while the company’s flagship trading platform still needs the quick number-crunching power that comes with the powerhog chips, these low-power Project Moonshot systems could be great for analyzing lots and lots of data — taking market data from the past three years, for example, and running a simulation.

“At that point, it’s really a throughput problem. It’s, ‘how efficiently with these enormous data sets can you run lots of different experiments and simulations?’” Dalton says. “It’s about, ‘how many cores can we put in a system’ and ‘is it easy to manage,’ and ‘is it easy for users to come in and say, you know what, I’m going to run twice as many things as I did yesterday.’ Can it scale?”

https://www.wired.com/wiredenterprise/2011/11/the-data-center-of-the-future/
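
Dalton's throughput problem is the embarrassingly parallel case where many cores do pay off today: independent simulations scale almost linearly with core count, unlike a single latency-critical trade path. A minimal sketch, with an invented stand-in for the simulation itself:

```python
# Minimal sketch of the throughput workload Dalton describes: many
# independent simulations over historical data, farmed out one per
# core. The "simulation" and its parameters are invented stand-ins.
from multiprocessing import Pool
import random

def run_simulation(seed: int) -> float:
    """One experiment over (pretend) three years of market data."""
    rng = random.Random(seed)
    pnl = 0.0
    for _ in range(100_000):
        pnl += rng.gauss(0.0001, 0.01)   # toy daily returns
    return pnl

if __name__ == "__main__":
    with Pool() as pool:                  # one worker per core by default
        results = pool.map(run_simulation, range(32))
    print(f"ran {len(results)} simulations;"
          f" mean PnL {sum(results) / len(results):.3f}")
```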

Sat, Dec 10, 2011 - 12:50am
Thieving Corp.

No mention of speed in entire article, all about power

AMD Betting on Power Consumption
By Quentin Hardy
November 7, 2011, 1:12 pm

Advanced Micro Devices, the biggest producer of computer semiconductors after Intel, announced Thursday that it was cutting 10 percent of its staff, some 1,100 people.

The cuts come across all parts of its business and are expected to save about $128 million in operating expenses by the end of 2012. The company said another $90 million in operational savings would come from “efficiencies” the company would institute. A “significant portion” of those savings will be spent on low-power chips, emerging markets and supplying components for cloud computing, A.M.D., which is based in Sunnyvale, Calif., said in a statement.

This is the usual kind of thing a company says when it is losing market share, which A.M.D. is. That same day, the research firm IDC said that in the third quarter Intel had 80.2 percent of the worldwide market for so-called x86 chips, which are used in personal computers, mobile computers and computer servers, while A.M.D. had a 19.7 percent market share, down from 20.6 percent.

But the cuts do seem like part of a real plan of attack, based on the increasingly important area of power consumption. In notebook computers and mobile devices, power is essential in providing longer battery life. Power also matters in cloud-based data centers, where tens of thousands of servers can consume lots of energy.

A.M.D.’s new focus on power consumption holds the promise of laptops that run 50 percent longer than current models and a 30 percent performance improvement in power consumption on computer servers. Further out, most of what we see on a computer board will be on a single chip, about 1.6 inches on a side.

Chuck Moore, chief technical officer of A.M.D.’s technology development group, said a new chip, code-named Bulldozer, “is designed from the bottom up to take advantage of low-power technologies.” Each chip has conjoined cores, the big management portions of the chip, which share some real estate and architecture. There are monitors on the chip that judge how large the current computing load is, and whether it requires a lot of power or a little. “We’re plowing new ground here,” Mr. Moore said.

https://bits.blogs.nytimes.com/2011/11/07/amd-betting-on-power-consumption/