Intel’s Insanely Tiny Processor Roadmap: "Clear Path" to 10nm Chips

via Gizmodo by matt buchanan on 7/3/08

Think Intel’s breakthrough 45-nanometer chips are impressive stuff? There was a time when Intel thought that dipping below 100nm would be miraculous, but Intel exec Pat Gelsinger now says that “today we see a clear way to get to under 10 nanometers,” and that it’ll happen within the next 10 years.

The next die shrink is to the 32nm Westmere chips next year, followed by 14nm a few years later and then the crazy sub-10nm chips after that. But Intel is probably going to have to make use of something like carbon nanotubes or spintronics to get below 10nm. The result of all that processing power, says Gelsinger, will be “a dramatic restructuring of the user interface.” Yes! I’ve always wanted true 3D computing goggles.

Demand for Intel’s Atom already outstripping supply?

via Engadget by Nilay Patel on 4/30/08

There’s a ton of upcoming laptops and devices based around Intel’s Atom processor, and it looks like all the early interest is causing that best of all possible problems for the chipmaker: it’s gotten too many orders. Intel told the WSJ that it’s planning on producing “millions” of Atom chips this year, but that it’s “seeing better-than-expected demand” as production begins and that “we are working quickly to address it.” Still, it looks like manufacturers are expecting a shortage to last for a while — ASUS predicted that supply would be constrained until the third quarter during its quarterly conference call, for example — and various Chinese trade publications have reported the same. That’s definitely not encouraging news, and with AMD’s Puma and VIA’s Isaiah nipping at Atom’s heels, Intel might want to kick things into a higher gear.

Intel patents cosmic ray detectors on-a-chip. What a relief.

via Engadget by Paul Miller on 3/8/08

That great perpetrator of worldly ills, the cosmic ray, has at last met its match. Intel has patented the concept of an on-chip detector of cosmic rays which would auto-correct for soft errors caused by the cosmic ray’s interference. Apparently Intel is concerned that cosmic rays — those perky particles from space that blast through the Earth’s atmosphere and tamper with your precious bodily fluids — are going to become “a major limiter of computer reliability in the next decade” as chips get smaller and smaller. The rays have already been proved to interfere with electronics in small ways, so while Intel doesn’t have a method for building an actual cosmic ray detector yet, it’s certainly getting a jump on the problem with this patent. We know we’ll certainly be sleeping better at night.

[Via Slashdot]

MacBook Air processor situation gets explained

via Engadget by Donald Melanson on 1/18/08

We already knew the basic details about the processor at the heart of Apple’s MacBook Air, but those itching to know exactly how Apple and Intel managed to cram everything into that oh so small package may want to head over to AnandTech, which has pieced together a fairly thorough report on the matter. As the site reports, the processor is based on Intel’s 65nm Merom architecture and packs an 800MHz bus, yet it uses the significantly smaller chip package that Intel had originally only planned to debut with the launch of its Montevina laptop platform later this year. That combination, along with the Intel 965GMS chipset with integrated graphics, allowed for a 60% reduction in total footprint size and a TDP rating of just 20W, as opposed to the 35W of the regular Core 2 Duo processor. If that’s still not enough MacBook Air minutiae for you, you can hit up the link below for the full rundown.

[Via AppleInsider]

Intel demos iPhone-like MID of the future

Intel just keeps banging out the hits from IDF. After the handful of McCaslin “next-quarter” and “coming-soon” UMPCs we saw from the chipmaker (and associates), Intel started busting out prototypes from its forthcoming Menlow chipset, using smaller, 45nm Silverthorne CPUs, and the 2009/2010 offering Moorestown — which is what the bad-boy you’re looking at in these photos is based on. In a rather obvious homage to the iPhone, the chip-kingpin presented this do-anything, go-anywhere MID (provided you can cram this French-bread-sized device into a pocket). The device will feature a 45nm CPU as well, plus all kinds of goodies like integrated WiFi and WiMAX, and apparently 24 hours of battery life on a single charge. Obviously, this product will probably never see the light of day (at least not in this form factor), but then again — you never really know.

Read — Intel shows concept iPhone running on Moorestown platform
Read — Intel’s iPhone clone, we’re not joking
Read — Intel Details Next Generation “Menlow” MID, UMPC Platform

[originating url]

Intel Launches Cheaper Intel Quad-core While AMD Still Looks Dumbfounded


In addition to its mobile Extreme CPU, Intel has also announced its 3.0GHz Core 2 Extreme processor, the 65nm QX6850, with four cores and dual 4MB Level 2 caches. The QX6850, touted as the fastest consumer processor now available, is the flagship of Intel’s new 1,333MHz Front Side Bus CPU family, which includes the Core 2 Duo E6850, E6750 and E6550, all of them at lower prices than the previous generation.

Processor                      Clock     FSB        L2 Cache   Price
Intel Core 2 Extreme QX6850    3.00GHz   1,333MHz   4MB x 2    $999
Intel Core 2 Duo E6850         3.00GHz   1,333MHz   4MB        $266
Intel Core 2 Duo E6750         2.66GHz   1,333MHz   4MB        $183
Intel Core 2 Duo E6550         2.33GHz   1,333MHz   4MB        $163


[originating url]

Intel readies massive multicore processors

Ants and beetles have exoskeletons — and chips with 60 and 80 cores are going to need them as well.

Researchers at Intel are working on ways to mask the intricate functionality of massive multicore chips to make it easier for computer makers and software developers to adapt to them, said Jerry Bautista, co-director of Intel’s Tera-scale Computing Research Program.

These multicore chips, he added, will also likely contain both x86 processing cores, similar to the brains inside the vast majority of Intel’s server and PC chips today, as well as other types of cores. A 64-core chip, for instance, might contain 42 x86 cores, 18 accelerators and four embedded graphics cores.

Some labs and companies such as ClearSpeed Technology, Azul Systems and Riken have developed chips with large numbers of cores — ClearSpeed has one with 96 cores — but those cores are capable of performing only certain types of operations.

The 80-core mystery

Ever since Intel showed off its 80-core prototype processor, people have asked, “Why 80 cores?”

There’s actually nothing magical about the number, Bautista and others have said. Intel wanted to make a chip that could perform 1 trillion floating-point operations per second, known as a teraflop. Eighty cores did the trick. The chip does not contain x86 cores, the kind of cores inside Intel’s PC chips, but cores optimized for floating-point (non-integer) math.

Other sources at Intel pointed out that 80 cores also allowed the company to maximize the room inside the reticle, the mask used to direct light from a lithography machine to a photo-resistant silicon wafer. Light shining through the reticle creates a pattern on the wafer, and the pattern then serves as a blueprint for the circuits of a chip. More cores, and Intel would have needed a larger reticle.

Last year, Intel showed off a prototype chip with 80 computing cores. While the semiconductor world took note of the achievement, the practical questions immediately arose: Will the company come out with a multicore chip with x86 cores? (The prototype doesn’t have them.) Will these chips run existing software and operating systems? How do you solve data traffic, heat and latency problems?

Intel’s answer essentially is, yes, and we’re working on it.

One idea, proposed in a paper released this month at the Programming Language Design and Implementation Conference in San Diego, involves cloaking all of the cores in a heterogeneous multicore chip in a metaphorical exoskeleton so that all of the cores look like a series of conventional x86 cores, or even just one big core.

“It will look like a pool of resources that the run time will use as it sees fit,” Bautista said. “It is for ease of programming.”
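
To make that idea concrete, here is a minimal Python sketch of a heterogeneous chip presented to software as one flat pool of generic cores. The 42/18/4 mix comes from the article’s 64-core example; the classes, the round-robin dispatch policy, and every name here are invented for illustration — the real mechanism would live in hardware and the runtime.

```python
class Core:
    """One core of some kind; callers never learn which kind did the work."""

    def __init__(self, kind):
        self.kind = kind  # "x86", "accelerator", or "graphics"

    def run(self, task):
        # From the program's point of view, every core looks the same.
        return f"{task}: done"


class CorePool:
    """The 'exoskeleton': 64 mixed cores exposed as one uniform pool."""

    def __init__(self):
        # The article's example mix: 42 x86 cores, 18 accelerators, 4 graphics cores.
        self.cores = ([Core("x86") for _ in range(42)] +
                      [Core("accelerator") for _ in range(18)] +
                      [Core("graphics") for _ in range(4)])
        self._next = 0

    def submit(self, task):
        # Simple round-robin stand-in for "the run time will use it as it sees fit".
        core = self.cores[self._next % len(self.cores)]
        self._next += 1
        return core.run(task)
```

The point of the sketch is the interface, not the dispatch policy: `submit` hides the core types entirely, which is exactly the “ease of programming” Bautista describes.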

A paper at the International Symposium on Computer Architecture, also in San Diego, details a hardware scheduler that will split up computing jobs among various cores on a chip. With the scheduler, certain computing tasks can be completed in less time, Bautista noted. It also can prevent the emergence of “hot spots” — if a single processor core starts to get warm because it’s been performing nonstop, the scheduler can shift computing jobs to a neighbor.
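
The hot-spot-avoiding behavior can be sketched in a few lines: always hand the next job to the coolest core. The temperature model, the numbers, and the function name are all assumptions for illustration — the real scheduler is hardware.

```python
def schedule(jobs, num_cores, heat_per_job=5.0, cooling_per_step=1.0):
    """Assign each job to the currently coolest core; return (job, core) pairs."""
    temps = [0.0] * num_cores  # toy per-core temperatures
    assignments = []
    for job in jobs:
        # Pick the coolest core, i.e. steer work away from hot spots.
        coolest = min(range(num_cores), key=lambda c: temps[c])
        assignments.append((job, coolest))
        # All cores cool passively each step, then the chosen core heats up.
        temps = [max(0.0, t - cooling_per_step) for t in temps]
        temps[coolest] += heat_per_job
    return assignments
```

Because each job heats the core it lands on, consecutive jobs naturally alternate between neighbors instead of piling onto one core and creating a hot spot.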

Intel is also tinkering with ways to let multicore chips share caches, pools of memory embedded in processors for rapid data access. Cores on many dual- and quad-core chips on the market today share caches, and with so few cores it’s a manageable problem.

“When you get to eight and 16 cores, it can get pretty complicated,” Bautista said.

The technology would prioritize operations. Early indications show that improved cache management could improve overall chip performance by 10 percent to 20 percent, according to Intel.
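
As a rough illustration of what prioritizing operations in a shared cache could mean, here is a toy cache that evicts the lowest-priority entry when it fills up. The policy and API are assumptions for the sake of the example, not Intel’s actual design.

```python
class PriorityCache:
    """Toy shared cache: on overflow, evict the lowest-priority entry first."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = {}  # key -> priority of the operation that cached it

    def put(self, key, priority):
        if key not in self.entries and len(self.entries) >= self.capacity:
            # Victim is the entry whose owning operation matters least.
            victim = min(self.entries, key=self.entries.get)
            del self.entries[victim]
        self.entries[key] = priority

    def __contains__(self, key):
        return key in self.entries
```

With a capacity of 2, caching entries at priorities 5, 1 and 3 keeps the priority-5 and priority-3 entries and evicts the priority-1 one — high-priority operations keep their data resident, which is where the claimed 10 to 20 percent performance gain would come from.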

As with the exoskeleton technology for heterogeneous chips, programmers ideally won’t have to understand or deliberately accommodate the cache-sharing or hardware-scheduling technologies. These operations will largely be handled by the chip itself and be obscured from view.

Heat is another issue that will need to be contained. Right now, I/O (input-output) systems need about 10 watts of power to shuttle data at 1 terabit per second. An Intel lab has developed a low-power I/O system that can transfer 5 gigabits per second at 14 milliwatts — less than 14 percent of the power used by current 5Gbps systems — and 15 gigabits per second at 75 milliwatts, according to Intel. A paper outlining the work was released at the VLSI Circuits Symposium in Japan this month.
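
The figures above reduce to a simple energy-per-bit comparison (energy per bit = power divided by bit rate), which is the metric I/O designers actually optimize:

```python
def picojoules_per_bit(power_watts, gigabits_per_sec):
    """Energy per bit in picojoules, from power (W) and data rate (Gbps)."""
    return power_watts / (gigabits_per_sec * 1e9) * 1e12  # J/bit -> pJ/bit

# Figures from the article:
today   = picojoules_per_bit(10.0, 1000.0)  # 10 W at 1 Tbps  -> 10 pJ/bit
lab_5g  = picojoules_per_bit(0.014, 5.0)    # 14 mW at 5 Gbps -> 2.8 pJ/bit
lab_15g = picojoules_per_bit(0.075, 15.0)   # 75 mW at 15 Gbps -> 5 pJ/bit
```

So the lab prototype cuts the energy cost of moving a bit by roughly half to two-thirds versus the 10 pJ/bit status quo — the kind of improvement Mooney argues is a prerequisite for core-to-core links on a massive multicore chip.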

Low-power I/O systems will be needed for core-to-core communication as well as chip-to-chip contacts.

“Without better power efficiency, this just won’t happen,” said Randy Mooney, an Intel fellow and director of I/O research.

Intel executives have said they would like to see massive multicore chips coming out in about five years. But a lot of work remains. Right now, for instance, Intel doesn’t even have a massive multicore chip based around x86 cores, a company spokeswoman said.

The massive multicore chips from the company will likely rely on technology called Through Silicon Vias (TSVs), other executives have said. TSVs connect external memory chips to processors through thousands of microscopic wires rather than one large connection on the side. This increases bandwidth.

[originating url]