Confirmed: Apple Can Enable Dual GPU and On-the-Fly Switching in MacBook Pro

via Gizmodo by Matt Buchanan on 10/22/08

Nvidia dropped by today to demo some of the awesome things that the GeForce 9400M in the new MacBooks can do that Intel’s integrated graphics just can’t touch, and to discuss a few technical points. Besides confirming that you’ll see it in other notebooks soon, they definitively answered some lingering questions about the chip’s capabilities: It can support up to 8GB of RAM. It can do on-the-fly GPU switching. And it can work together with the MacBook Pro’s discrete 9600M GT. But it doesn’t do any of those things. Yet.

Since the hardware is capable of all of these things, it means that they can all be enabled by a software/firmware/driver update. Whether or not that happens is entirely up to Apple. While you can argue that Hybrid SLI—using both GPUs at once—has a limited, balls-to-the-wall utility, being able to switch between the integrated 9400M and discrete 9600M GT on the fly without logging out would obviously be enormously easier than the current setup, and allow for some more creative automatic energy preferences—discrete when plugged in, integrated on battery. Hell, you can do it in Windows on some machines.

But since it’s Apple, it’s also entirely possible we’ll never see any of this come to pass—GPU-accelerated video decoding has totally been possible with the 8600M GT in the previous-gen MacBook Pros, and well, you know where that stands. [Apple & Nvidia Coverage@Giz]

GeForce 9400M to hit notebooks from five major vendors, mock Intel

via Engadget by Samuel Axon on 10/22/08

Now that NVIDIA’s GeForce 9400M has made its debut in Apple’s new MacBooks, Technical Marketing Director Nick Stam says that five major notebook vendors are planning to ship systems with the chipset — though we don’t know if that includes Apple or not. Stam expects NVIDIA will carve out 30 percent of the integrated graphics market for itself, partly by improving other experiences besides games — Google Earth, photo editing, day-to-day video encoding, and other activities performed by people who use keys besides W, A, S, and D. Frankly, we’re just thankful we’ve evolved past the days when we needed a 19-inch monster to perform high-impact 3D tasks without sacrificing to the sinister gods of screen tearing.

ATI Breaks Teraflop Barrier with Radeon HD 3870 X2 GPU

via Gizmodo by Wilson Rothman on 1/28/08

Remember that honkin’ ATI graphics card we showed you at CES? The one that was 1,000 times as fast as a Cray-1? Well, it’s official, making its debut today as the $450 ATI Radeon HD 3870 X2. It’s the first GPU to break the teraflop barrier, and it offers nearly double the performance of the HD 3870 you spent all your money on back in November. Press release with technical details after the jump. [Product Page]

AMD Delivers Enthusiast Performance Leadership(1) with the Introduction of the ATI Radeon(TM) HD 3870 X2

— Industry’s First Teraflop Consumer Graphics Card Redefines High-Definition Performance for 1080P Gaming and beyond —
SUNNYVALE, Calif. –(Business Wire)– Jan. 28, 2008 AMD (NYSE:AMD) today announced the immediate availability of the ATI Radeon(TM) HD 3870 X2 graphics processor, expanding the visual boundaries of PC entertainment well beyond the 1080P High Definition (HD) threshold. The industry’s first graphics processor to break the Teraflop (one trillion floating point operations per second) barrier, the ATI Radeon HD 3870 X2 nearly doubles the performance of the award-winning ATI Radeon(TM) HD 3870 introduced in November 2007.(2)
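
That teraflop claim squares with the commonly cited RV670 specs, figures the release itself doesn't state: two GPUs, each with 320 stream processors issuing two floating point operations (a multiply-add) per clock, at roughly 825MHz:

    2 GPUs × 320 stream processors × 2 ops/clock × 0.825 GHz ≈ 1.06 teraflops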

Through an elegant yet aggressive design, the 55 nanometer process-based ATI Radeon 3870 X2 combines two ATI Radeon HD 3870s on a single graphics board, connected through integrated CrossFire(TM) technology. ATI Radeon 3870 X2 is also the first performance-leadership graphics product in the world to support Microsoft’s upcoming DirectX(R) 10.1 technology. The ATI Radeon 3870 X2 delivers a new class of price and performance leadership with unbelievable enthusiast value at a suggested retail price of US $449.

This launch follows on the success of the recently released ATI Radeon(TM) HD 3400 and ATI Radeon(TM) HD 3600 series graphics products, completing a comprehensive portfolio of next-generation 55nm GPUs that deliver unparalleled price, performance and energy efficiency from entry-level to performance-leadership class products.

“PC gaming enthusiasts demand the ultimate in performance and scalability for their HD gaming experience and the ATI Radeon 3870 X2 sets the standard by which all should be compared in this segment,” said Rick Bergman, senior vice president and general manager, Graphics Product Group, AMD. “With this launch we reaffirm our commitment to enthusiast performance leadership and send a clear message that the ATI Radeon 3870 X2 is the new gold standard of the PC gaming world.”

Ultimate Performance

With the upcoming introduction of Microsoft’s DirectX 10.1 specification, gamers can expect more realistic gaming environments while developers have access to an increased amount of tools and resources to enhance overall image quality. Through delivering top-to-bottom DirectX 10.1 support, ATI Radeon HD 3000 series users can enjoy a more complete gaming experience now and in the future.

“We’re pleased to see our newest DirectX 10 technology brought to market so soon with the introduction of AMD’s latest enthusiast hardware,” said Kevin Unangst, senior global director of Games for Windows, Microsoft. “One of the greatest advantages of PC gaming is the rapid pace at which the experiences evolve and improve. ATI Radeon HD 3870 X2 delivers on the promise of DirectX 10 gaming with significantly improved visuals and enhanced performance.”

ATI Radeon HD 3870 X2 will also provide support for ATI CrossFireX(TM), the innovative next-generation AMD multi-GPU technology designed to support up to four GPUs. Software support to enable ATI CrossFireX is planned for late Q1 2008.

Ultimate HD Experiences

With the launch of ATI Radeon HD 3870 X2, AMD continues to support the industry-leading Unified Video Decoder (UVD) and ATI Avivo(TM) HD for exceptional platform efficiency and image quality for H.264 and VC-1 high definition content. Enhanced HDMI functionality is also offered via integrated HDCP and audio for HDMI video.

“Alienware prides itself on staying at the forefront of HD gaming innovation so that our brand stands for the best possible experience for our customers,” said Patrick Cooper, director of product group, Alienware. “With the launch of the ATI Radeon HD 3870 X2 in our Area-51 ALX CrossFire platform, we can push the boundaries of visual realism one step further and provide enthusiast gamers with the perfect blend of next-generation features, performance and platform efficiency.”

Ultimate Efficiency

The ATI Radeon HD 3870 X2 is the first enthusiast graphics processor to use TSMC’s 55nm process technology. The smooth transition to 55nm has allowed for a 2X increase in performance-per-watt over the previous generation. Through an elegant board design, the ATI Radeon 3870 X2 delivers exceptional acoustics that are roughly equivalent to a single ATI Radeon HD 3870. When combined with ATI PowerPlay(TM) technology, the ATI Radeon 3870 X2 delivers exceptional idle power efficiency with the ability to dynamically raise or lower GPU power depending on the usage scenario.

The ATI Radeon HD 3870 X2 launches with broad availability and ecosystem support from AMD’s Add-in-Board (AIB) and Systems Integrators (SI) partners. AIB partners building boards based on the ATI Radeon HD 3870 X2 include Asus, ASK, Club3D, Diamond Multimedia, HIS, ITC, Jetway, MSI, Sapphire, Triplex, Tul and Visiontek. Systems integrators launching ATI Radeon HD 3870 X2 series include ABS, Alienware, Canada Computers, CyberPower, Falcon-Northwest, iBUYPOWER, Maingear, Systemax and Velocity Micro.

About AMD

Advanced Micro Devices (NYSE:AMD) is a leading global provider of innovative processing solutions in the computing, graphics and consumer electronics markets. AMD is dedicated to driving open innovation, choice and industry growth by delivering superior customer-centric solutions that empower consumers and businesses worldwide. For more information, visit http://www.amd.com.

(1) Performance comparisons using ATI Radeon HD 3870 X2 versus NVIDIA 8800 Ultra using 3D Mark 2006, Supreme Commander, Call of Juarez, BioShock and Unreal Tournament 3 at 2560X1600 on AMD Phenom 2.6GHz CPU, AMD 790FX chipset, 2GB DDR2-800, Windows VISTA 64bit and ATI Catalyst display driver v. 8.45

(2) Performance comparisons of ATI Radeon HD 3870 versus ATI Radeon HD 3870 X2 using 3D Mark 2006, Supreme Commander and Unreal Tournament 3 at 2560X1600 on AMD Phenom 2.6GHz CPU, AMD 790FX chipset, 2GB DDR2-800, Windows VISTA 64bit and ATI Catalyst display driver v. 8.45

New NVIDIA 8700M GT Rendering Looks Better Than Xbox 360

NVIDIA has just released its new top-of-the-line GeForce 8700M GT, just in time to remind you that no matter how cool your new MacBook Pro or Sony VAIO is, you are not the King of the Hill anymore.

Not only that: NVIDIA says your Xbox 360’s graphics have now been officially overtaken by a notebook GPU, as you can see in the gallery. The new 8700M GT first appeared in the Toshiba Dynabook Satellite WXW, which was just announced in Japan.

The 8700M GT has the same 32 stream processors as the 8600M GT, but the GPU core clock has been increased from 472MHz to 625MHz. The shader clock has also seen a bump, from 950MHz to 1,250MHz, and so has the memory clock, which now runs at 800MHz instead of 700MHz, with a maximum of 512MB on board.

These new specs push performance up quite a bit, with the texture fill rate jumping from 7.6 gigatexels per second to the 10 gigatexel/s mark. All quite stunning for a mobile graphics chip, matching the performance of some of last year’s best desktop cards.
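
Those fill rate figures fall straight out of the clock bump if you assume the 8700M GT keeps the 8600M GT's 16 texture units (an assumption on our part, since NVIDIA's announcement doesn't spell it out):

    16 texture units × 472MHz ≈ 7.6 gigatexels/s  (8600M GT)
    16 texture units × 625MHz ≈ 10.0 gigatexels/s (8700M GT)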

Other than the new graphics processor, the Toshiba Dynabook Satellite WXW is your usual top-of-the-line Santa Rosa laptop. It comes with a Core 2 Duo T7300 at 2GHz, a 1,680 × 1,050 pixel screen, and a 120GB hard drive. It also comes with your usual ports plus HDMI out, S/PDIF digital audio and a fingerprint sensor. The NVIDIA 8700M GT in this configuration, however, comes with just 256MB of RAM.

Good specs, fugly design.

More ASUS XG Station Details Unveiled

ASUS expects to launch its XG Station next month to OEM and channel partners — no retail availability expected

Notebook users rejoice: ASUS is set to produce its XG Station external graphics card for notebooks. ASUS previously took the wraps off the XG Station at the Consumer Electronics Show earlier this year. The XG Station will not have retail availability; instead, ASUS plans to ship it to OEMs and channel partners.

ASUS will not sell the XG Station as a barebones external graphics card enclosure. Instead, ASUS will bundle the XG Station with ASUS PCIe graphics cards. Pricing on XG Stations will vary depending on the bundled graphics card.

ASUS’ XG Station takes advantage of a notebook’s ExpressCard slot to provide a PCIe x16 slot for additional graphics processing capability. Note that the ExpressCard interface carries only a single PCIe lane, so while the slot is physically x16, it runs at a fraction of full x16 bandwidth. ASUS demonstrated the XG Station with an EN7900GS graphics card at CES 2007.

In addition to the enhanced video capabilities, the XG Station offers audio output. There is a single headphone output jack on the XG Station – sorry folks, there is no 5.1 output support. However, the XG Station supports Dolby Headphone technology for simulated six-channel surround sound.

Audio and video capabilities aside, ASUS equips the XG Station with a large LED display to monitor vital system information. The LED display shows the following information:

  • System master volume
  • GPU clock speed
  • Current GPU temperature
  • Dolby® Headphone status
  • Current frames per second (FPS)
  • GPU fan speed

A control knob also gives users easy overclocking control, though its overclocking functionality is limited to the GPU core clock.

Expect ASUS to release the XG Station to eligible customers next month. Pricing on XG Station bundles is not yet known, but expect them to cost slightly more than the bundled ASUS graphics card alone.

Co-opting GPU for CPU Tasks Advanced by NVidia

Earlier this week, engineers at nVidia put the finishing touches on version 0.8 of the company’s Compute Unified Device Architecture (CUDA) system for Windows and Red Hat Linux. CUDA’s objective is to enable C programmers to utilize the high-throughput pipelining architecture of an nVidia graphics processor – pipelines that are typically reserved for high-quality 3D rendering, but which often sit unused by everyday applications – for compute-intensive tasks that may have nothing to do with graphics.

Today, the company announced its first C compiler – part of the CUDA SDK, which will enable scientific application developers for the first time to develop stand-alone libraries that are executed by the graphics processor, through function calls placed in standard applications run on the central processor.

NVidia’s objective is to exploit an untapped reservoir of processing power sitting on users’ desktops and notebooks. While multi-core architecture has driven parallelism in computing into the mainstream, multi-pipeline architecture should theoretically catapult it into the stratosphere. But applications today are naturally written to be executed by the CPU, so any GPU-driven parallelism that’s going to happen in programming must be evangelized first.

Which is why the company has chosen now to make its next CUDA push, a few weeks prior to the Game Developers’ Conference in San Francisco. The greatest single repository of craftspersons among developers may be in the gaming field, so even though games already occupy the greater part of the GPU’s work time, it’s here where a concept such as CUDA can attract the most interest.

“The GPU is specialized for compute-intensive, highly parallel computation – exactly what graphics rendering is about,” reads nVidia’s latest CUDA programming guide (PDF available here), “and therefore is designed such that more transistors are devoted to data processing rather than data caching and flow control.”

Huge arithmetic operations may be best suited to GPU execution, nVidia engineers believe, because they don’t require the attention of all the CPU’s built-in, microcoded functions for flow control and caching. “Because the same program is executed for each data element,” reads the CUDA v. 0.8 guide, “there is a lower requirement for sophisticated flow control; and because it is executed on many data elements and has high arithmetic intensity, the memory access latency can be hidden with calculations instead of big data caches.”

For CUDA to actually work, however, a computer must be set up with an exclusive NVidia display driver; CUDA is not an intrinsic part of ForceWare, at least not yet. In addition, programs must be explicitly written to support CUDA’s libraries and custom driver; it doesn’t enable the GPU to serve as a “supercharger” for existing applications. Because the GPU is such a different machine, there’s no way for it to take a load off the CPU’s shoulders directly, like an old Intel 8087 or 80287 math co-processor used to do.

An application that supports CUDA thus, by definition, supports nVidia. AMD also has its own plans for co-opting GPU power, which it made immediately clear after its acquisition of ATI.

The CUDA programming guide demonstrates how developers can re-imagine a math-intense problem as being delegated to processing elements in a 2D block, like bestowing assignments upon a regiment of soldiers lined up in formation. Blocks and threads are delegated and proportioned, the way they would normally be if they were being instructed to render and shade multi-polygon objects. Memory on-board the GPU is then allocated using C-library derivatives of common functions, such as cudaMalloc() for allocating blocks of memory with the proper dimensions, and cudaMemcpy() for transferring data into those blocks. It then demonstrates how massive calculations that would require considerable thread allocation on a CPU are handled by the GPU as matrices.
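
To make that concrete, here's a minimal sketch of what such a program looks like. The matAdd kernel, its launch dimensions, and the file itself are our own illustrative example rather than code from nVidia's guide, but cudaMalloc(), cudaMemcpy() and the <<<blocks, threads>>> launch syntax are the actual CUDA mechanisms described above:

    // matadd.cu -- hypothetical sketch of the CUDA programming model:
    // delegate one matrix element to each thread in a 2D grid of 2D blocks.
    #include <stdio.h>
    #include <cuda_runtime.h>

    #define N 16  // matrix dimension (N x N)

    // The same program runs on every data element; each thread locates its
    // element from its block and thread indices -- the "regiment of
    // soldiers lined up in formation."
    __global__ void matAdd(const float *a, const float *b, float *c, int n)
    {
        int row = blockIdx.y * blockDim.y + threadIdx.y;
        int col = blockIdx.x * blockDim.x + threadIdx.x;
        if (row < n && col < n)
            c[row * n + col] = a[row * n + col] + b[row * n + col];
    }

    int main(void)
    {
        size_t bytes = N * N * sizeof(float);
        float ha[N * N], hb[N * N], hc[N * N];
        for (int i = 0; i < N * N; i++) { ha[i] = (float)i; hb[i] = 2.0f * i; }

        // Allocate memory on the GPU board and copy the inputs over.
        float *da, *db, *dc;
        cudaMalloc((void **)&da, bytes);
        cudaMalloc((void **)&db, bytes);
        cudaMalloc((void **)&dc, bytes);
        cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

        // Launch a 2D grid of 8x8-thread blocks covering the matrix.
        dim3 threads(8, 8);
        dim3 blocks((N + 7) / 8, (N + 7) / 8);
        matAdd<<<blocks, threads>>>(da, db, dc, N);

        // Copy the result back to the host and spot-check one element.
        cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
        printf("c[5] = %.1f (expected %.1f)\n", hc[5], ha[5] + hb[5]);

        cudaFree(da); cudaFree(db); cudaFree(dc);
        return 0;
    }

Each thread does one element's worth of arithmetic with essentially no branching, which is exactly the trade the guide describes: many data elements, high arithmetic intensity, minimal flow control.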

“This complete development environment,” read an nVidia statement this morning, “gives developers the tools they need to solve new problems in computation-intensive applications such as product design, data analysis, technical computing, and game physics.”

Hands-on with the Asus XG Station external GPU

We had to see it to believe it, but the Asus XG Station does indeed turn that wimpy laptop of yours into a somewhat capable gaming rig. The cats at ASUS set us up with a head-to-head demo of two ’zactly spec’d laptops with awesomely weak Intel GMA 945 graphics processors — just one of the two was hooked up to an Asus XG Station via ExpressCard. After attaching the external monitor to one of the XG’s two DVI connectors, Asus let the gaming demo fly. The stock laptop struggled to keep up with the action, with noticeably huge jumps in frames making intense gameplay a non-starter. The XG-equipped laptop, however, hummed along quite happily. Gameplay was smooth, though a few frames were occasionally dropped. Surround sound is in fact simulated, which is both good and bad: the good is you get pseudo 5.1 surround from any headphones; the bad is you get pseudo 5.1 surround from any headphones. Verdict: the XG Station is ready to game as long as you don’t expect it to perform like a dedicated gaming rig. But if you’re the occasional gamer who likes his lappie ultra-portable, and underpowered for gaming as a result, then this might be the solution for you. Lots of pics after the break.