Akitio Node Cabinet review: Real, affordable graphics for your laptop

 

The Akitio Node external GPU cabinet is here to give your Thunderbolt 3-equipped laptop a big boost. This affordable unit—basically, a big steel box with a 400-watt PSU and a fan in front—lets you drop in most modern AMD or Nvidia graphics cards and then connect it to a laptop using PCIe over Thunderbolt 3/USB-C.


Gordon Mah Ung

The Akitio Node features one fan for the PSU and one in front that offers plenty of airflow.

For the most part, when it works, it’s amazingly smooth. For example, we cracked open the Node, dropped in a Founders Edition GeForce GTX 1080 Ti card, then plugged it into an HP Spectre x360 13t. Once we had the latest drivers installed from Nvidia’s website, we were off and running. As these results from 3DMark Fire Strike Ultra show, the tiny HP ultrabook gives what-for to big, giant, fast gaming laptops.


IDG

Yes, a sub-3-pound laptop can hang with big, fat gaming laptops–if you cheat like we did.

The score you see above, however, is the overall score for 3DMark Fire Strike Ultra, which also counts CPU performance. The dual-core Kaby Lake chip in the tiny HP Spectre x360 13t isn’t going to compete with the quad-cores. In the 3DMark test that measures just graphics performance, however, you’ll see a wider spread against the GTX 1080 in the giant EON17-X laptop.
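
To see how a lagging CPU sub-score drags the overall number down, here’s a minimal sketch of a composite score built as a weighted harmonic mean, the general approach 3DMark-style overall scores take. The weights below are illustrative assumptions, not Futuremark’s published formula.

```python
# Illustrative composite score: a weighted harmonic mean punishes any
# weak sub-score, so a dual-core CPU drags the overall number down
# even when the GPU result is stellar. Weights here are assumptions.
def overall_score(graphics: float, physics: float,
                  w_gpu: float = 0.85, w_cpu: float = 0.15) -> float:
    """Weighted harmonic mean of GPU and CPU sub-scores."""
    return (w_gpu + w_cpu) / (w_gpu / graphics + w_cpu / physics)

print(overall_score(7000, 7000))  # balanced quad-core desktop: 7000.0
print(overall_score(7000, 3500))  # same GPU, half the CPU: ~6087
```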

Yes, there’s a good chance the limited x4 PCIe Gen 3 connection could rob you of some performance compared with what you’d get if the GPU were in a desktop. In fact, the same GPU will typically score in the 7,000 range when in a full x16 PCIe Gen 3 slot. But just remember: The alternative is being stuck with the laptop’s integrated graphics, unable to game at anywhere near this level of performance.
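
For a sense of how much raw link bandwidth separates the two cases, here’s a quick back-of-the-envelope sketch; the real-world penalty varies by game and depends on how much traffic actually crosses the bus.

```python
# PCIe Gen 3 runs at 8 GT/s per lane with 128b/130b encoding, so each
# lane carries roughly 8 * (128/130) / 8 bytes = ~0.985 GB/s one way.
GTRANSFERS_PER_S = 8
ENCODING_EFFICIENCY = 128 / 130

def pcie3_gbytes_per_s(lanes: int) -> float:
    """Approximate one-direction PCIe Gen 3 bandwidth in GB/s."""
    return GTRANSFERS_PER_S * ENCODING_EFFICIENCY * lanes / 8

for lanes in (4, 16):
    print(f"x{lanes:<2}: {pcie3_gbytes_per_s(lanes):5.2f} GB/s")
# x4 :  3.94 GB/s  (the ceiling an eGPU enclosure exposes)
# x16: 15.75 GB/s  (a full desktop slot, 4x the link bandwidth)
```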

3DMark Fire Strike Ultra graphics-only scores: the Spectre x360 with the GTX 1080 Ti eGPU

 

The best laptops of 2017: Ultrabooks, budget PCs, 2-in-1s, and more

 

Choosing the best laptop can be difficult these days. With companies like Dell, HP, Acer, and Asus continually launching updates of popular notebooks and expansions of product lines, we’re all but swimming in options right now.

Summer has pushed even more convertibles, 2-in-1s, and traditional notebooks onto store shelves. The most interesting ones poke holes in existing assumptions about certain categories. Microsoft’s Surface Laptop, for example, is an attempt to revive the company’s battle with Chromebooks, while Dell’s Inspiron 15 7000 Gaming—our “Best budget gaming laptop” pick—offers 1080p gaming for just $850. Vendors are also serious about squeezing AMD’s new CPUs into their lineups, with Asus recently debuting the first Ryzen laptop at Computex.

Given the number of choices out there, we’re hard at work evaluating more laptops. For our latest update, we’ve added “Best MacBook” as a category, in order to better help you compare the full range of laptops available.

Dell might be sticking to the adage of “If it ain’t broke, don’t fix it” when it comes to the XPS 13, but that strategy keeps producing the best ultrabook of the bunch. The Kaby Lake XPS 13 shares the same design as its predecessors: a quality aluminum exterior and carbon-fiber top, and that wonderfully compact, bezel-free 13-inch screen.

Dell actually released two updates to the XPS 13 in 2016: The one at the start of the year swapped in a Skylake CPU, added a USB Type-C port that served as an alternative charging port, and offered upgraded storage options. The most recent refresh—and our new pick for Best Ultrabook—keeps the same chassis as the Skylake XPS 13, features a jump to Intel’s new Kaby Lake processor, and sports a slightly larger battery. You get improved performance across the board, with a nice bump of an extra half-hour of battery life during video playback.

Kaby Lake Dell XPS 13

Gordon Mah Ung

The Kaby Lake version of the Dell XPS 13 maintains that balance between portability, compact size, and performance that we like so much.

Our only lingering complaint is the small keyboard, but overall, you can’t lose with the newest XPS 13. It’s a truly compact ultrabook that punches above its class.

 

HP Spectre x2 review: It beats the Surface Pro on value, if not performance

 

Our review of HP’s Spectre x2 12.3-inch 2-in-1 tablet begins with a simple question: Can HP continue its tradition of being an elegant, yet durable alternative to Microsoft’s Surface Pro flagship?

The answer is Yes. HP took the best bits from its Elite x2 tablet and the first-generation Spectre x2 tablet (2015), then updated the new Spectre x2 with the latest Kaby Lake chips. The Spectre x2 gives you more features for the money than the Surface Pro: Our $1,300 review unit included both the keyboard and the stylus right in the box (hear that, Microsoft?). It’s a shame this solid value is let down by middling battery life and a pesky fan.

HP Spectre x2 2017

IDG

TABLE OF CONTENTS

  • Specs: Kaby Lake and an outstanding display
  • Kickstand, pen loop anchor the productivity
  • Extra software
  • Performance: Marred by mediocre battery life
  • Conclusion: Good value despite a few flaws

Specs: Kaby Lake and an outstanding display

HP will offer one $1,300 retail version of the Spectre x2 (the one we tested):

  • Model name: Spectre x2 12-c012dx
  • CPU: Core i7-7560U
  • RAM: 8GB LPDDR3-1600
  • SSD: 360GB PCIe NVMe

Four more SKUs will be available via HP.com:

An entry-level Core i5 version for $1,150:

  • Model name: Spectre x2 12t
  • CPU: Core i5-7260U
  • RAM: 8GB LPDDR3-1600
  • SSD: 128GB PCIe NVMe

An entry-level Core i7 version for $1,230:

  • Model name: Spectre x2 12-c052nr
  • CPU: Core i7-7560U
  • RAM: 8GB LPDDR3-1600
  • SSD: 256GB PCIe NVMe

Two higher-end Core i7 versions have these starting configurations and can be upgraded. This one starts at $1,670:

  • CPU: Core i7-7560U
  • RAM: 16GB LPDDR3-1600
  • SSD: 512GB PCIe NVMe

The highest-end one starts at $1,970:

  • CPU: Core i7-7560U
  • RAM: 16GB LPDDR3-1600
  • SSD: 1TB PCIe NVMe

 

Meet the huge 4K curved monitors that cost far less than you’d think

Do you fancy a big monitor? And we mean a really big monitor, because JapanNext has unleashed a pair of monster curved 4K displays aimed (partly) at being hooked up to PCs for productivity use – and they don’t cost the earth, either.

The JN-VC490UHD and JN-VC550UHD are 49-inch and 55-inch monitors respectively, and they’re being pitched from multiple angles: living-room use, gaming, and productivity. If you need a copious amount of screen real estate for multitasking or professional/industrial use, then obviously one of these is a neater alternative to having multiple monitors.

As mentioned, gamers might also be tempted by these curved 4K screens, as they’re obviously going to give you a pretty breathtaking experience when playing the latest shooter (or whatever’s your gaming poison) – particularly given that these displays have AMD’s FreeSync on board to combat stuttering and tearing.

The monitors offer a resolution of 3840 x 2160 with a 60Hz refresh rate and a 3ms response time (4ms for the larger screen – again, that’s still pretty tidy for gaming). These are 10-bit SVA panels with 99% coverage of the sRGB colour gamut.

Easy on the eyes

As Ars Technica reports, the monitors use ELED backlighting, which means they’re flicker-free and easy on the eyes when used for long periods of work.

You get full picture-in-picture and picture-by-picture support, and connectivity includes DisplayPort 1.2, HDMI 2.0, and a pair of HDMI 1.4 ports, alongside a legacy VGA connector. In total, you could hook up four PCs to the display, with a Full HD picture for each.
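
That four-PC claim is straightforward arithmetic: a 3840 x 2160 panel divides exactly into a 2x2 grid of 1920 x 1080 regions, so picture-by-picture can give each input a native Full HD window. A trivial sketch:

```python
# A UHD panel is exactly four Full HD panels in a 2x2 grid.
UHD = (3840, 2160)
FHD = (1920, 1080)

cols = UHD[0] // FHD[0]   # 2 pictures side by side
rows = UHD[1] // FHD[1]   # 2 pictures stacked
print(cols * rows)        # -> 4 simultaneous Full HD sources
```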

There’s also a built-in USB hub and integrated 6W stereo speakers.

Both these monitors are currently only available in Japan, and as mentioned, they don’t carry wallet-destroying prices. The larger 55-inch screen is priced at around $870 (around £700, AU$1,135), and the 49-inch display runs to $725 (around £585, AU$945).

JapanNext does sell products in Europe, so fingers crossed we’ll see a launch of these giant screens over here.

ARM announces new Artemis CPU core, first 10nm test chip, built at TSMC

ARM and TSMC have had a joint agreement in place for several years to collaborate on R&D work and early validation of process nodes, and they’ve now announced a major milestone in that effort. As of yesterday, ARM says it has successfully validated a brand-new 10nm FinFET design at TSMC.

TSMC fab

The unnamed multi-core test chip features a quad-core CPU from ARM, codenamed Artemis, a single-core GPU as a proof of concept, and the chip’s interconnect and various other functions.

Artemis test chip

This isn’t an SoC that ARM will ever bring to market. Instead, its purpose is to serve as a validation vehicle and early reference design that helps both TSMC and ARM understand the specifics of the 10nm FinFET process as it moves toward commercial viability. One of the features that pure-play foundries like TSMC offer their customers is tools and libraries specifically designed to fit the capabilities of each process node. Since each new node has its own design rules and best practices, TSMC has to tune its offerings accordingly, and working with ARM to create a reasonably complex test chip is a win/win for both companies. ARM gets early insight into how best to tune upcoming Cortex processors; TSMC gets a standard architecture and SoC design that closely corresponds to the actual chips it’ll be building for its customers as the new process node moves into production.

ARM 10nm vs. 16nm

The slide above shows the gains TSMC expects to realize from moving to 10nm in place of its current 16nm process. To the best of our knowledge, TSMC’s 10nm is a hybrid process, but it’s not clear exactly what that hybrid looks like. Our current understanding is that the upcoming 10nm node might combine a 10nm FEOL (front end-of-line) with a 14nm BEOL (back end-of-line, which governs die size). EETimes, however, reported in March that TSMC’s 10nm shrink would keep a 20nm minimum feature size, while its 7nm would deliver a 14nm minimum feature size (10/20 and 7/14, respectively). Either way, Intel is the only company that has announced a “true” 14nm or 10nm die shrink. (The degree to which this process advantage materially helps Intel these days is open to debate.)
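
To see why the FEOL/BEOL split matters, here’s a toy calculation using the classic rule of thumb that density scales with the square of the linear shrink. The numbers are purely illustrative; real node names no longer map cleanly onto physical pitches, which is exactly the ambiguity described above.

```python
# Toy density math, assuming ideal (feature size)^2 area scaling.
def density_gain(old_nm: float, new_nm: float) -> float:
    """Ideal transistor-density multiplier for a full linear shrink."""
    return (old_nm / new_nm) ** 2

print(f"16nm -> 10nm, full shrink: {density_gain(16, 10):.2f}x")  # ~2.56x
print(f"20nm -> 20nm BEOL (wires): {density_gain(20, 20):.2f}x")  # 1.00x
# If the interconnect (BEOL) keeps 20nm-class pitches, wiring-limited
# blocks see little of that ideal gain -- one reason a hybrid node
# delivers a smaller die shrink than its name implies.
```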


Things to note about the slide: First, the top line is potentially confusing. The 0.7x reduction in power would be easier to read if ARM had labeled it “ISO performance at 0.7x power.” Second, the performance gains expected here purely as a result of the node transition are downright anemic. I don’t want to read too much into these graphs because it’s very early days for 10nm, but there’s been plenty of talk around 16/14nm as a long-lived node, and results like this are part of why: only a handful of companies will want to pay the extra costs for the additional masks required as part of the die shrink. TSMC has already said it believes 10nm will be a fairly short-lived node, and that it expects more extensive customer engagement at 7nm.

None of this means ARM can’t deliver compelling improvements at 10nm, but the limited lithography gains imply a heavier lift for the CPU research teams and design staff, who need to find additional tricks they can use to squeeze more performance out of silicon without driving up power consumption.

As for when 10nm might ship, past timelines suggest it’ll be some time yet. TSMC has said it expects early 10nm tapeouts to drive significant demand starting in Q2 2017. While that’s a quick turnaround for a company whose 16nm only entered volume production in August 2015, the speed can be explained if the 10nm node continues to leverage TSMC’s existing 20nm technology. Bear in mind that there’s a big delay between when TSMC typically ships hardware and when consumer products launch, particularly in mobile devices, where multiple companies perform complicated verification processes on multiple parts of the chip.

Either way, this tapeout is a significant step forward for both ARM and TSMC, and 10nm will deliver improvements over the 16nm tech available today.

The Razer Core can boost any Thunderbolt 3-equipped laptop with an external GPU

Razer Core

There’s long been an argument about gaming laptops and obsolescence. You plunk down $2,000 on a high-end laptop and in seemingly no time it’s borderline useless. No way to upgrade. No real tweaks you can try. It’s a $2,000 paperweight.

Perhaps spurred by these complaints, Alienware eventually said “Okay, what if you could hook up a desktop graphics card to your laptop? And then upgrade it whenever you want?” And thus the Alienware Graphics Amplifier was born. MSI (sort of) followed suit. Asus too.

But there’s been one nagging issue in this hot-button field: proprietary claim-staking. Each of the above solutions works within its own ecosystem. Alienware uses a custom PCI Express connection that limits it to Alienware laptops. MSI’s was designed to work with one laptop. Asus’s is—you guessed it—proprietary to Asus. That’s left some people to rig up an external GPU through the ExpressCard port, a versatile but clunky solution.

Back in June Intel said Thunderbolt 3 might be a good candidate for external GPU docks, given its 40 Gbps throughput. It’s not PCIe levels of data transfer, but it’s better than nothing. Also, it’s not proprietary to any one computer manufacturer.

Leading the charge? Razer.

Razer Core

Razer’s new GPU dock is dubbed the Razer Core. Designed to work with Razer’s new GPU-less laptop, the Razer Blade Stealth, it will in fact work with any laptop that packs a Thunderbolt 3 port. Which at the moment means not many laptops, although that number is steadily increasing.

Razer Core

The Core is an aluminum-housed dock that slides open to accommodate “virtually every popular desktop graphics card from both AMD and Nvidia,” according to Razer’s release (double-wide, full-length PCI-Express x16 cards, drawing up to 375W of power). Cards are held in place by a screw, and then the whole enclosure slides back together. Both power and data are supplied by the single Thunderbolt 3 connection. Plug it in and you’re ready to play—no reboot required.

The Core itself measures approximately 8.5-by-4.1-by-13.5 inches (218-by-105-by-340mm), with two-zone Chroma lighting, four USB 3.0 ports, and Gigabit Ethernet.

The question now is performance. In theory, the Razer Core sounds amazing for those who need to own a laptop but want to do some hefty gaming at home: self-contained, no need for external power, relatively small, will connect to any Thunderbolt 3 computer so you don’t need to buy from the same manufacturer for the rest of your life, and more attractive than a naked graphics card sitting on your desk.

But the question remains, what kind of boost will you see when hooked in through Thunderbolt instead of PCIe? As I said, 40 Gbps is fantastic when compared to past ports (USB 3.1’s transfer speed is 10 Gbps, for instance), but it’s nowhere near what you’d get plugging the same card directly into a PCIe slot on your motherboard. We’ll need to get our hands on the Razer Core and run a bunch of tests before we can recommend it.
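
As a rough yardstick, here’s how the headline link rates stack up. These are raw, one-direction spec-sheet figures, and Thunderbolt 3 also reserves part of its 40 Gbps for DisplayPort and USB traffic, so actual GPU throughput is lower still.

```python
# Nominal link rates in Gbps (spec-sheet numbers, not measured throughput).
links_gbps = {
    "USB 3.1 Gen 2": 10,
    "Thunderbolt 3": 40,
    "PCIe Gen 3 x16": 126,   # 16 lanes * ~7.88 Gbps effective per lane
}

desktop = links_gbps["PCIe Gen 3 x16"]
for name, rate in sorted(links_gbps.items(), key=lambda kv: kv[1]):
    print(f"{name:>14}: {rate:>3} Gbps ({rate / desktop:.0%} of a desktop x16 slot)")
```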

Razer Core with Razer Blade Stealth

Still, it’s an important development—potentially the redemption that gaming laptops sorely need. While I use a gaming laptop because my job requires one, it’s always been tough to recommend something so expensive that so quickly loses value. With the Razer Core maybe you can get a few more years out of your aging hardware—or buy a cheaper laptop and supplement it with an old card you have lying around. The latter is clearly what Razer expects.

MSI’s lavish, yet functional SLI bridges for Nvidia GPUs are so ludicrous I adore them

msi sli bridge with fan

Here, we like to preach a smart, practical approach to PC hardware. That hot new gear? You probably don’t need it. But there’s no denying the gut appeal of amped-up, gloriously excessive hardware that cranks things to 11 just for the hell of it. It’s why we stuffed 128GB of DDR4 RAM into a PC. It’s why mad tinkerers did, well, this in the quest for lower temps. And it’s why I fell head over heels in love with MSI’s ostentatious new 3-way and 4-way SLI bridges the second I laid eyes on them.

SLI bridges are the little connectors that allow you to use multiple GeForce graphics cards simultaneously in a single PC. (AMD uses a similar technology dubbed CrossFire, but recent Radeon models have ditched the connectors and allow GPUs to communicate directly via your motherboard.) Since most people use single-GPU configurations, they may know bridges only as “one of those extra bits that came packaged with swanky gaming motherboards.” But there’s actually a market for fancier aftermarket SLI bridges constructed from more premium parts, to better suit your flashy multi-GPU gaming setup. Asus ROG and EVGA both offer premium SLI bridges, for example.

Enter MSI’s new models.

These bad boys aren’t just “forged out of high-quality materials.” No, that’s not nearly enough. They’re also emblazoned with MSI’s dragon-y GAMING logo, and it’s LED lit, too! Now we’re getting somewhere—but that’s still not excessive enough. The icing on this deliciously over-the-top cake is its fan.

MSI SLI bridge w fan in system

Yes, these tricked-out SLI bridges come bundled with their own “silent” Cooler Master fan accessory that can be attached directly to the bridge to keep air flowing better than ever between your broiling graphics cards. It even offers rubber standoffs at the end of the fan to accommodate graphics cards and custom coolers of varying sizes, and you can shift the fan up or down along the length of the bridge to focus airflow on the top three cards, or the bottom three.

That’s ridonkulous. That’s the definition of “doing it because you can.” As far as I can tell with a quick Google search, the fan’s a first for a fancy SLI bridge. And did I mention the competition for this is something that possibly shipped for free with your motherboard?

MSI’s shamelessly catering to an exclusive, enthusiast niche with these SLI bridges. I love it.

We’ll have to see if the infatuation lingers after MSI comes clean about the price of these SLI bridges, though. A few lights and a superfluous fan may win over my heart, but I’m not crazy.

First chip that uses light for data transfer developed


WASHINGTON: A microprocessor chip that uses light, rather than electricity, to transfer data at rapid speeds while consuming minute amounts of energy has been developed by researchers, including some of Indian origin.

The new technology could pave the way for faster, more powerful computing systems and network infrastructure.

“Light-based integrated circuits could lead to radical changes in computing and network chip architecture in applications ranging from smartphones to supercomputers to large data centres, something computer architects have already begun work on in anticipation of the arrival of this technology,” said Milos Popovic, assistant professor at the University of Colorado Boulder in the US.

Traditional microprocessor chips – found in everything from laptops to supercomputers – use electrical circuits to communicate with one another and transfer information.

In recent years, however, the sheer amount of electricity needed to power the ever-increasing speed and volume of these data transfers has proven to be a limiting factor.

To overcome this obstacle, the researchers, including Rajesh Kumar, also from CU-Boulder, turned to photonic, or light-based, technology.

Sending information using light rather than electricity reduces a microchip’s energy burden because light can be sent across longer distances using the same amount of power.

“One advantage of light-based communication is that multiple parallel data streams encoded on different colours of light can be sent over one and the same medium – in this case, an optical wire (waveguide) on a chip, or an off-chip optical fibre of the same kind as those that form the Internet backbone,” said Popovic, whose team developed the technology in collaboration with a team led by Rajeev Ram, a professor at the Massachusetts Institute of Technology (MIT).

“Another advantage is that the infrared light that we use – and that TV remotes also use – has a physical wavelength shorter than 1 micron, about one hundredth of the thickness of a human hair,” said Popovic.

“This enables very dense packing of light communication ports on a chip, enabling huge total bandwidth,” he said.

The new chip has a bandwidth density of 300 gigabits per second per square millimetre, about 10 to 50 times greater than current packaged electrical-only microprocessors.

Measuring just 3 millimetres by 6 millimetres, the chip bridges the gap between current high-speed electronics manufacturing and the needs of next-generation computing for chips with large-scale integrated light circuits.
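
Those two figures combine into an eye-catching aggregate, if you assume (a simplification on our part) that the quoted density holds uniformly across the full die:

```python
# Back-of-the-envelope: bandwidth density x die area, assuming the
# quoted 300 Gbps/mm^2 applies uniformly across the whole 3mm x 6mm die.
DENSITY_GBPS_PER_MM2 = 300
die_area_mm2 = 3 * 6                       # 18 mm^2

total_gbps = DENSITY_GBPS_PER_MM2 * die_area_mm2
print(f"{total_gbps} Gbps = ~{total_gbps / 1000:.1f} Tbps of aggregate optical I/O")
# -> 5400 Gbps, i.e. ~5.4 Tbps under this simplified reading
```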

It retains state-of-the-art traditional electronic circuitry while incorporating 850 optical input/output (I/O) components in order to create the first integrated, single-chip design of its kind.

“This is a milestone. It’s the first processor that can use light to communicate with the external world,” said Vladimir Stojanovic, associate professor at the University of California, Berkeley.

The study was published in the journal Nature.

New Yorkers get promised gigabit Wi-Fi access

 

NEW YORK: LinkNYC, the announced network that will cover New York City with free Wi-Fi, has begun installing its first access points in the city.

The plan was announced by the mayor’s office on November 17, 2014. It aims to bring free, encrypted, gigabit wireless internet coverage to the five boroughs by converting old payphones into Wi-Fi hotspots.

According to The Verge, 500 other hubs are set to be installed throughout the city by mid-July.

It is widely anticipated that one or two weeks of testing will take place before New Yorkers are able to use the hubs to get online.

The full network will comprise more than 7,500 public hubs across New York City, replacing pre-existing phone booths.

New Salesforce tool lets users have their way with external data

Salesforce Connect

Salesforce users have been able to access data in external apps ever since the release of the cloud vendor’s Lightning Connect tool last year, but a new generation of the product announced on Thursday extends those ties considerably.

Dubbed Salesforce Connect, the new offering lets employees not only access external data but also proactively manage it.

Read/write capabilities in the new tool mean that Salesforce users can now create, read, update and delete records in various external sources in real time, right from within Salesforce. That puts data from order-management, receivables or inventory-management systems, for instance, within much closer reach.

Custom adapters, meanwhile, make it possible to connect to APIs (application programming interfaces) in formats other than the OData format Connect uses. The result is that developers can connect Salesforce to any Web API or any of the more than 10,000 public APIs available on the Internet.

“Until now, you were limited to integrating with OData Web services, forcing you to build or buy OData producers hosted outside of Salesforce,” explained software engineer Lawrence McAlpin in a post announcing the new feature on the Salesforce Developers Blog earlier this year. “Now, with the Apex Connector Framework, you can integrate with anything — well, anything that we can get to from Salesforce by using HTTP callouts.”
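
To make the OData round trip concrete, here’s a minimal sketch, in Python rather than Apex, of the kind of requests Connect’s built-in OData adapter issues against an external system. The endpoint, entity, and field names here are hypothetical.

```python
import requests

# Hypothetical OData endpoint standing in for an external order system.
BASE = "https://erp.example.com/odata/Orders"

# Read: the adapter translates a Salesforce query into OData query
# options such as $filter and $select.
resp = requests.get(BASE, params={
    "$filter": "Status eq 'Open'",
    "$select": "OrderId,Status,Total",
})
resp.raise_for_status()
for order in resp.json().get("value", []):
    print(order["OrderId"], order["Total"])

# Write: with read/write support, an edit to an external object inside
# Salesforce becomes an update against the source system's record.
requests.patch(f"{BASE}('1001')", json={"Status": "Shipped"}).raise_for_status()
```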

Finally, companies with multiple Salesforce “orgs,” or instances — across regions, subsidiaries or functions, for example — can now connect them without having to write any code, Salesforce said.

Salesforce has long touted its “API-first” approach. More than half of the 4 billion-plus transactions it processes every day come in through the Salesforce API, it says.