Tuesday, December 18, 2012

Apple targets Wi-Fi trouble with EFI firmware updates for 2012 Macs


The latest round of Apple EFI firmware updates tackles Thunderbolt, sleep, and Wi-Fi issues.

Apple has released three EFI firmware updates for some of its Mac systems released in 2012, tackling a number of issues pertaining to sleep, Thunderbolt performance, and -- more relevant to many users -- the reliability of Wi-Fi connectivity.

The first update is a Wi-Fi update for all late 2012 Mac systems that improves compatibility with 5GHz-band Wi-Fi signals.

The update includes a new version of the AirPortBrcm4311.kext kernel extension. This update is specific to systems running OS X 10.8.2 build 12C2034 (you can look up the build by clicking the version number of OS X in the About This Mac dialog box, available from the Apple menu).
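If you prefer the command line, the same version and build information is available from Terminal via the sw_vers tool; a quick sketch, with example output for the build in question:

sw_vers
# ProductName:    Mac OS X
# ProductVersion: 10.8.2
# BuildVersion:   12C2034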

The second update is for 13-inch MacBook Pro models, and it's supposed to improve sleep performance and Thunderbolt router support, and addresses a problem with HDMI display output. This update also includes the 5GHz Wi-Fi band compatibility fix. The last update contains similar fixes, but for iMac systems.

The updates should be available through Apple's Software Update service, but can also be downloaded from the following locations:


As with any firmware updates, be sure to have your system attached to a reliable power supply and follow the onscreen instructions for installing. Do not interrupt the installation, even if it takes a few minutes to restart and install, and be sure to fully back up your system before applying this or any other update to your system.

Tuesday, December 11, 2012

Top 5 internal drives of 2012: Your system deserves a worthy upgrade


No matter how big or small your computer is, there is at least one internal drive hosting its operating system and programs. This drive is almost always a standard internal drive, either a regular hard drive (HDD) or a solid-state drive (SSD). The former is affordable and offers lots of storage space, while the latter is generally more expensive but superfast. The good news is that 2012 was the year SSDs became increasingly popular, thanks to falling prices and the growing number of vendors entering this segment of the storage market.

With that in mind, if you have a computer that uses a hard drive as its main internal storage, you definitely want to replace it with an SSD. This is the single upgrade that will breathe new life into your computer.

If you have a desktop, however, it's better to use an SSD as the main drive that hosts the operating system and a fast hard drive as a secondary drive to host data. This way you have the best of both performance and storage space.

If you want to find out more about digital storage, don't forget to check out my series on the basics. For those who want a quick pick for the holidays, here are the five best internal drives I reviewed during 2012. Any of these drives will serve your system well, so the list is sorted by review order rather than ranking.

OCZ Vector
The OCZ Vector is the latest drive from OCZ and is the first drive made entirely by OCZ itself, from the controller to the flash memory. The result is something quite impressive. In my testing, it's arguably the fastest consumer-grade SSD to date. Coming in the ultrathin (7mm) 2.5-inch design and shipped with a 3.5-inch drive bay converter, the Vector works with all standard systems, from desktops to ultrabooks.

In my opinion, it's best used with a desktop, however, since it's not a drive with the best energy efficiency. For that you want to check out the Samsung 840 Pro below.

Samsung 840 Pro
The Samsung 840 Pro is an upgrade of the already-excellent Samsung 830. The new drive shares the same 7mm-thin, 2.5-inch design as its predecessor. On the inside, however, it uses a new controller and toggle-mode NAND flash memory to offer a much better combination of performance and energy efficiency. In fact, it is currently the most energy-efficient drive on the market, consuming just 0.068W when active and 0.042W at idle. For this reason, the new Samsung is best suited for laptops and ultrabooks.

Corsair Neutron GTX
The Corsair Neutron GTX is the first SSD from Corsair that I've worked with. Despite sharing the increasingly popular 7mm, 2.5-inch design, the drive is quite different from most other SSDs, since it uses a new controller, the LAMD LM87800, and high-performance toggle-mode NAND from Toshiba. The result is one of the best performances I've seen. The good news is that getting it won't break the bank: the new Corsair drive is priced at around $1 per gigabyte, and the price is expected to drop soon.

Plextor M5 Pro
The Plextor M5 Pro is one of the fastest SSDs on the market and the first SSD from Plextor to support the new 7mm, 2.5-inch design. Similar to the Corsair drive above, it also comes with a new controller, the Marvell 88SS9187 Monet, which provides enterprise-grade double data protection. The drive also comes with friendly pricing, costing less than $1 per gigabyte.

WD VelociRaptor WD1000DHTZ
The WD VelociRaptor WD1000DHTZ is the only standard hard drive on this list and it makes it here because it's one of a kind.

Unlike the rest of the consumer-grade hard drives on the market, the VelociRaptor spins at 10,000rpm (as opposed to 7,200rpm in other high-speed hard drives) and offered very fast performance in my testing. While the drive can't compete with SSDs in terms of boot and shutdown times, it offers comparable data copy rates, and even faster rates when you use two units in RAID 0. This means the drive makes an excellent secondary hard drive for a high-end system where the main drive, which hosts the operating system, is an SSD. That said, you can also use it as a main drive in a desktop and expect quite an improvement over other hard drives.

Those who do a lot of data manipulation work, such as movie editing, would benefit a lot from the VelociRaptor since, like all hard drives, it doesn't suffer from the limited program/erase cycles found in all SSDs.

Source: http://cnet.co/YUKWpC

Is Time Machine really backing up your drives?


Time Machine should by default back up all internal hard drives on a Mac; however, some people are experiencing a problem in which drives appear to be automatically excluded from being backed up.

A new report is out that suggests a fault may exist in Apple's Time Machine service, causing internal drives to be automatically and silently added to Time Machine's exclusion list, resulting in the service not backing up the data on these drives and not notifying the user of the change.

As outlined on Diglloyd's Mac Performance Guide blog, this problem appears to be specific to setups where many internal drives are being managed. To see it happen, make sure one of your internal drives is mounted and available, then add it to the Time Machine exclusion list and restart your computer. After the system boots, reopen the Time Machine exclusion list and unmount the drive you previously excluded.

When performing these steps, those experiencing this issue will see the drive disappear from the exclusion list as expected but then be replaced by another mounted drive in the system that is subsequently excluded from backups. This unintended addition to the exclusion list will result in this new drive's data not being backed up.
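One way to double-check what Time Machine actually considers excluded, independent of what the System Preferences pane shows, is the tmutil command-line tool included with OS X 10.7 and later. A minimal sketch, where the volume name is a placeholder for one of your own drives:

# Ask Time Machine whether a specific volume is on the exclusion list
tmutil isexcluded /Volumes/Media

# If a drive has been silently excluded, it can be added back to the backup set
sudo tmutil removeexclusion /Volumes/Media

Running the first command against each internal volume after a restart is a quick way to confirm whether the silent-exclusion behavior described above is occurring on your system.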

It is important not to confuse this behavior with Time Machine's default behavior for handling external USB and FireWire drives. Time Machine is built to exclude external drives by default, so only the Mac's internal drives are backed up and not every USB thumbdrive you use is copied to your backups.

If it is indeed a bug, this issue appears to be a problem only for those with rather elaborate drive setups involving multiple, heavily partitioned internal drives. In most cases Mac users have one or perhaps two internal drives on their systems, which Time Machine handles quite well.

Additionally, this bug may be more specific to individual setups instead of being a problem experienced by all Mac users. While some readers have confirmed seeing the problem that Diglloyd outlined, others with similar setups have not seen this specific problem. Though Diglloyd claims it has been around since OS X Lion, this inconsistency makes it difficult to find a single cause of the problem. However, one possibility may lie in how Time Machine identifies drives to include or exclude.

When handling individual files and folders, Time Machine excludes by file path; when managing volumes, however, it does so by their UUIDs, which may be the source of the problem for those experiencing this bug. The UUID for a volume ought to be a unique number that is generated when the volume is formatted and is based on the drive's properties. If for some reason the UUID is blank (all zeros) or matches that of another drive (for example, after cloning one volume to another), the system may still use the volume, but services that rely on the UUID to identify the drive may have problems. It is possible that Time Machine could confuse two similar UUIDs in this manner, especially if multiple utilities and operating systems have been used to manage partitions on a system's internal drives.

To check the UUIDs of the volumes on the system, open Disk Utility and get information on each mounted volume. In the window that appears you will find data and statistics on the drive, one entry being the UUID. Compare these between your various volumes to make sure they are unique.
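The same UUID information is also available from the command line via diskutil, which can be quicker when comparing several volumes at once. A simple sketch (the mount points under /Volumes will differ on your system):

# Print the UUID reported for every mounted volume
for vol in /Volumes/*; do
    echo "== $vol =="
    diskutil info "$vol" | grep -i "UUID"
done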

Additionally, this problem may be rooted in corruption in the Time Machine preferences file, which holds all of the volume configuration information for the service. Corrupt preferences are a common reason why programs and services stop working properly, and removing the preferences so they are rebuilt from scratch is an easy and recommended remedy. To do this for Time Machine, first go to the Time Machine system preferences and make a note of the backup drives used and the list of excluded files; screenshots are an easy way to do this.

Then open the Library/Preferences folder at the root of your hard drive (Macintosh HD > Library > Preferences), remove the file "com.apple.TimeMachine.plist," and restart your computer. This will cause the Time Machine service to relaunch and re-create its default preferences file. After doing this you can refer to your notes on your previous Time Machine configuration and add the destination drives and exclusion-list items again accordingly.
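For those comfortable in Terminal, the same preference reset can be done with a couple of commands. This is only a sketch of the steps described above, and it moves the file aside rather than deleting it, so it can be restored if anything goes wrong:

# Move the Time Machine preferences aside so the service rebuilds them on restart
sudo mv /Library/Preferences/com.apple.TimeMachine.plist ~/Desktop/com.apple.TimeMachine.plist.bak

# Restart the computer
sudo shutdown -r now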

Even if not everyone is affected by this bug, it does serve to remind us that there may be odd quirks with any backup system, so it is always best to regularly check your backup routines, and consider using multiple approaches to your backups (for example, drive cloning in addition to Time Machine). In addition to making sure your backup services are set up correctly, be sure to check the destination drive or drives themselves to make sure they are not experiencing any errors, by using Disk Utility to run a format and partition table verification.

These options are especially key as your storage setups expand and get more complex. Often people start with a single drive and then slowly add more storage and migrate their data to larger setups (both internal and external), and over time can build quite elaborate drive setups. As this happens, making sure the data gets properly managed with whatever backup approaches are being used becomes more important.

Tuesday, December 4, 2012

Flash memory made immortal by fiery heat


Macronix's 'Thermal annealing' process extends SSD life from 10k to 100m read/write cycles

Taiwanese non-volatile flash memory manufacturer Macronix is set to reveal technologies it says will make it possible for flash memory to survive 100 million read/write cycles.

Today's flash memory wears out over time, because moving electrons in and out of transistors wears them down. That problem has led manufacturers to implement “wear levelling” algorithms that ensure data is not always written to the same regions of a disk. By sharing the work around, wear levelling helps flash disks to age gracefully, instead of frequently-used regions dying young.

The likes of Intel and Samsung have, of late, been keen to point out that their wear levelling techniques are now so advanced that the disks they make will likely survive all but the most extended working lives.

But Macronix's innovations, which it will reveal at the 2012 International Electron Devices Meeting in San Francisco on December 11th, look like making wear levelling irrelevant.

Macronix has signed up to deliver a talk titled “9.1 Radically Extending the Cycling Endurance of Flash Memory (to > 100M Cycles) by Using Built-in Thermal Annealing to Self-heal the Stress-Induced Damage” at the event. The summary for the talk explains that “Flash memory endurance is limited by the tunnel oxide degradation after repeated P/E stressing in strong electric field.”

“Thermal annealing should be able to repair the oxide damage,” the summary continues, “but such theory cannot be tested in real time since completed device cannot endure high temperature > 400°C and long baking time is impractical for real time operation.”

Macronix's breakthrough is described as “A novel self-healing Flash, where a locally high temperature (>800°C), short time (ms) annealing is generated by a built-in heater.” This apparatus performs so well that the company says its tests reached 100 million read/write cycles without any signs of trouble ... and that longer life looks possible but they ran out of time to run more tests.

A happy by-product of the process is faster erase times, giving flash yet another speed advantage over spinning rust.

One of the researchers on the project, Hang-Ting Lue, has told IEEE Spectrum that the amount of energy required to heat flash in this way is sufficiently small that it will be suitable for use in mobile devices. He has also said Macronix intends to commercialise the technology, but of course it's too early to suggest a timeframe for its arrival in mass-market products.

Source: http://bit.ly/VeJAOw

MySQL gains new batch of vulns


Overruns, privileges, DoS and more

A series of posts on ExploitDB by an author signing as “King Cope” reveals a new set of MySQL vulnerabilities – along with one item that may simply be a configuration problem.

The vulnerabilities, which emerged on Saturday, include a denial-of-service demonstration, a Windows remote root attack, two overrun attacks that work on Linux, and one privilege escalation attack, also on Linux.

The overflow bugs crash the MySQL daemon, allowing the attacker to then execute commands with the same privileges as the user running MySQL. “King Cope” also demonstrated a user enumeration vulnerability.

The privilege escalation vulnerability, in which an attacker could escalate themselves to the same file permissions as the MySQL administrative user, has provoked some to-and-fro on the Full Disclosure mailing list, with one writer stating that “CVE-2012-5613 is not a bug, but a result of a misconfiguration, much like an anonymous ftp upload access to the $HOME of the ftp user.”

Red Hat has assigned CVEs to the vulnerabilities, but at the time of writing, Oracle has not commented on the issues.

Source: http://bit.ly/Vs7trM

AMD finishes its 'Piledriver' Opteron server chip rollout


Looking ahead to 'Steamroller' and 'Excavator', presumably

AMD has had a wrenching couple of years, and its executives are wrestling with so many transitions in the processor market and inside AMD that they are just punting out the new Opteron 4300 and 3300 CPUs for entry servers without making a fuss with the press or analyst communities.

No briefings, no fuss, no muss, here's the press release and the die shot, and we'll see ya next year.

It's an unconventional approach, but El Reg will get you the information about the new chips and do a little analysis, just the same.

The new Opteron 4300 and 3300 processors are based on the same "Piledriver" cores that are the guts of the "Trinity" Fusion A Series APUs announced in May, the FX Series of enthusiast desktop chips that came out in October, and the "Abu Dhabi" Opteron 6300 high-end server chips that launched in November.

The original plan before Rory Read took over as CEO was to push the Opteron 4300 processor up to ten cores with a chip known as "Sepang" and fitting in a new processor socket that was a successor to the C32 socket used with the Opteron 4000s, 4100s, and 4200s. An Opteron 6300 was then going to be two of these Sepang chips jammed into a single ceramic package, side by side, for a total of 20 cores in a single socket.

The problem was that the Opteron 6300 socket was not going to be compatible with the G34 socket. With so few server makers standing behind the Opterons (relative to their peak enthusiasm five or six years ago), a socket change would have been bad for AMD – particularly coming at a time when GlobalFoundries was having trouble ramping its 32 nanometer processes.

And so in November 2011, AMD scrapped that plan and decided instead to just focus on making the Piledriver cores do more work and get modest clock speed increases. In February of this year, AMD publicly copped to this changed plan, giving us the eight-core "Seoul" Opteron 4300 and the "Delhi" Opteron 3300, as well as the aforementioned Opteron 6300 that is already out there.

The Piledriver cores have four new instructions and a bunch of tweaks to goose the performance of the dual-core module compared to the first-generation "Bulldozer" module. The new instructions include FMA3 (floating point fused multiply add), BMI (bit manipulation instruction), TBM (trailing bit manipulation), and F16c (for half-precision 16-bit floating point math).

As we discussed at length with the Opteron 6300 launch, the branch predictors, schedulers, load/store units, and data prefetchers have all been tweaked to run better, and the memory controller had its top memory speed goosed from 1.6GHz to 1.87GHz.

Add up all of the changes in the Piledriver cores and you get a 7 to 8 per cent improvement in instructions per cycle, plus slightly faster memory and slightly higher clock speeds.

The Opteron 4300 has eight cores on a die, and depending on the model, it has either six or eight of those cores activated. It is aimed at both unisocket and dual-socket machines, somewhere between a high-end Xeon E3 and a low-end Xeon E5 in the Intel x86 server chip lineup.

The memory controller on the Opteron 4300 supports ultra-low-voltage DDR3 main memory that runs at 1.25 volts, as well as regular 1.5-volt and low-voltage 1.35-volt memory sticks. The processor supports up to six memory slots per C32 socket across two memory channels, for a maximum capacity of 192GB of memory per socket.

The Opteron 4300 has two x16 HyperTransport 3 (HT3) point-to-point links running at 6.4GT/sec linking the two processors in a dual-socket machine together as well as to the chipset and peripherals in the server.

AMD didn't just do a global replace of the Bulldozer cores with the Piledriver cores to make the Opteron 4300s. It made a few changes to the lineup.

For many years, AMD has been shipping four different styles of Opterons. The plain vanilla ones run at the standard voltage and have the standard thermal profiles. The Special Editions, or SEs, run hotter and clock higher and deliver the highest performance, but they are also wickedly expensive and impossible to put into dense servers. The Highly Efficient, or HE, parts are a bin sort to find chips that run at significantly lower voltages with slightly lower clock speeds compared to the standard parts, and the Extremely Efficient, or EE, parts are a deeper bin sort to find chips that run at even lower voltages and clock speeds but which have very low thermals.

The Opteron 6000 series comes in SE, regular, and HE variants, while the Opteron 4000s and now the Opteron 3000s come in standard, EE, and HE variants.

The interesting thing about the Opteron 4300s is the EE part. With the Opteron 4200, AMD offered an eight-core processor that ran at 1.6GHz with a Turbo Core boost speed of 2.8GHz inside a 35-watt thermal envelope; it cost $377. With the Opteron 4300, AMD has downshifted the EE part to only four active cores, but goosed the base clock speed to 2.2GHz and the Turbo Core speed to 3.0GHz while still staying in that 35-watt thermal envelope.

The other thing to notice is that there are only six Opteron 4300s compared to ten Opteron 4200s, but that doesn't mean much. There are two eight-core parts and three six-core variants, and there are fewer standard and HE SKUs. AMD could add more SKUs in spring 2013, and probably will in February or March when Intel is readying its "Haswell" Xeon E3 chips.

SKU for SKU, the new Opteron 4300s offer the same or around 3 per cent more clock speed than the Opteron 4200s they are most like in the product line, except for that radically different EE part; those chips cost around 10 per cent more. When you add the instruction-level performance to the clock speed gains, you get a chip that has about 10 per cent more oomph for the same increase in cost. Add in compiler tweaks and you can push performance gains up by as much as 15 per cent, says AMD.

This is not the kind of thing that will cause companies to ditch Xeons for Opterons, but by the same token this is probably sufficient to keep Opteron customers who have invested in particular servers adding the new chips to their machines. AMD needs something more dramatic than this to shake up the x86 server biz, and for now it looks like the company is content to have us all wondering what its Opteron ARM plans are.

Made with microservers in mind
The Opteron 3300s, like their predecessors the Opteron 3200s, fit in the AM3+ socket, which is a 942-pin socket that is not precisely compatible with the prior 941-pin AM3 socket and even less so with 940-pin AM2 and AM2+ sockets. (This is not to be confused with the original Socket 940 socket for the Opteron chips from way back when.) What matters to microserver customers is that any machine they have that used an Opteron 3200 can use an Opteron 3300.

There are three models of the Opteron 3300s, one with eight cores and two with four cores, just like with the Opteron 3200s. In general, the base and turbo clock speeds are up by 100MHz to 200MHz and the prices are the same for the top two parts. The Opteron 3300s are aimed at single-socket servers only, and support up to four memory sticks for that socket, with two memory channels. The Opteron 3300 has one x16 HT3 link running at 5.2GT/sec.

The big difference this time around is that there is an HE and an EE part instead of two HE parts. The low-end Opteron 3320 EE is quite a bit different from its predecessor. For one thing, its clock speed is a lot lower, down to 1.9GHz from 2.5GHz, and its thermals have similarly taken a big dive, down to 25 watts peak from 45 watts with the Opteron 3250 HE part. That is what happens when you drop the voltage and the clocks at the same time.

This Opteron 3320 EE chip is clearly aimed at Intel's two-core, four-thread Xeon E3-1220L v2 processor, which fits in a 17-watt thermal envelope and which costs $189.

The low-end Opteron 3300 is perhaps being positioned to compete with the forthcoming "Centerton" Atom processor, as well, which is expected before year's end. Intel is shooting for that to be a 6-watt part, and that means a four-core Opteron 3320 EE has to do four times the work of a Centerton Atom at the same price to compete. We'll see in a few weeks.

What is also interesting is that AMD was pitching the $99 Opteron 3250HE chip at low-end microservers with the goal of helping service providers put together cheap minimalist boxes for hosting (not virtualization, but bare-metal hosting). The Opteron 3320 EE is going to have less performance than its predecessor – probably somewhere around 20 per cent less is our guess – and yet it costs nearly twice as much.

The thing is, you can get down into a 25-watt power envelope, and AMD clearly thinks it can charge a premium for a "real" x86 processor down in that range. The hosters will still be able to get the older Opteron 3250 HE if they want it, of course.

Now would be a good time for AMD to start telling people about its real plans for future "Steamroller" and "Excavator" Opterons. The engineers had better come up with something good that GlobalFoundries can actually make on time.

SeaMicro servers and Opteron ARM chips cannot save AMD's Opteron business. It has to fight Intel – and win.

Source: http://bit.ly/UcvbBE

New Mac malware uses OS X launch services


A new piece of malware for OS X also acts as a reminder to monitor the launch services in OS X as a security precaution.

Security company Intego is reporting the discovery of a new malware package for OS X. The package is a Trojan horse called OSX/Dockster.A, which appears to have keylogging features to record what is typed on an infected system, in addition to remote-access features for backdoor access into the system. When installed, the Trojan attempts to contact the server "itsec.eicp.net," likely to receive instructions for allowing remote access to the system.

As with other recent malware for OS X, Dockster is a Java-based threat that will not run unless you have Java installed on your system. It also currently uses the patched CVE-2012-0507 vulnerability in Java that was used by the Flashback malware in OS X, and appears to be in a testing phase. As a result, this Trojan is a minimal threat; however, its mode of infection offers a reminder to OS X users that simply monitoring the system launcher configurations on your Mac can be an easy way to determine if malware or other unwanted software is being installed on your computer.

As with other OS X malware, this new Trojan utilizes launch agents, which are small configuration files that tell the launcher processes in the system (one that runs globally and another that runs for each log-in session) to automatically and conditionally start or stop various background routines. To do so, a developer simply has to create a properly formatted configuration file and drop it into one of the folders monitored by the launcher process. After doing this, the next time the system is restarted or if you log out and log back in, the task will load and run.

The default folders the launcher uses are called "LaunchDaemons" and "LaunchAgents," and are located directly inside the Library folders either for the system, for all users, or for individual users. While a Trojan or other malware will need administrative access (e.g., prompt for a password) to install a configuration file in global resources such as the system folder or in the library folder for all users, it can write to the LaunchAgents folder for your personal account without prompting you, resulting in a targeted process or routine running when you log into your account.
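A quick way to see what is already configured in these locations is to list them from Terminal; anything unfamiliar in your user-level LaunchAgents folder or in the /Library folders is worth investigating. A simple sketch:

# Third-party launch items live in these folders; Apple's own items live under
# /System/Library and can generally be left alone
ls -l ~/Library/LaunchAgents /Library/LaunchAgents /Library/LaunchDaemons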

In this case, a launch agent file called "mac.Dockset.deman" is created in the user's account; it has the system launch the Trojan at log-in, and when run will load an executable that appears as ".Dockset" in Activity Monitor.

While the use of these launcher configuration files makes it easy for malware developers to have programs launch automatically, it also makes it easy to detect this malicious behavior. By setting up a folder monitoring service, you can have the system notify you if a file has been added to these folders, so you can check it out and further investigate its origin and function.

While you can write custom scripts and programs to do this, Apple includes all the components necessary for monitoring a folder in OS X. To run AppleScript routines when a folder's contents change, Apple provides a service called Folder Actions that can be set up to monitor folders on the file system. This service can be used to bind some of the built-in scripts Apple provides as AppleScript examples to the various launch agent and launch daemon folders in the system, prompting you with a warning whenever a file is added or removed.

AppleScript is relatively behind the scenes in OS X, so setting up this service does take a couple of steps to complete, but everything you need is available on the system, making it a relatively painless process. I recently outlined how to use this service to monitor launch agent folders, which I recommend all Mac users do to ensure they are aware of what is configured to run automatically on their systems.

Source: http://cnet.co/TyBTDd

New Mac malware spreading from Dalai Lama tribute site


"Dockster" takes advantage of the same vulnerability exploited by the "Flashback" malware, which infected more than 600,000 computers.

A new piece of Mac malware has been discovered on a Web site linked to the Dalai Lama, using a well-documented Java exploit to install a Trojan on visitors' computers and steal personal information.

Dubbed "Dockster," the malware was found lurking on Gyalwarinpoche.com, according to security research firm F-Secure. The malware takes advantage of the same vulnerability exploited by the "Flashback" malware to install a basic backdoor that allows the attacker to download files and log keystrokes.

Although "Dockster" leverages an exploit that has already been patched, computers not updated or running older software may still be at risk. F-Secure notes that this is not the first time Gyalwarinpoche.com has been compromised and warns that Mac users aren't the only ones who should avoid visiting the site; Windows malware has also been detected on it.

At its height, the original Flashback, which was designed to grab passwords and other information from users through their Web browser and other applications, was estimated to be infecting more than 600,000 Macs. The original malware, first detected in fall 2011, typically installed itself after a user mistook it for a legitimate browser plug-in while visiting a malicious Web site. The malware would then collect personal information and send it back to remote servers.

Source: http://cnet.co/QDliRE

Thursday, November 29, 2012

Adobe could unveil Retina version of Photoshop CS6 on Dec. 11


Adobe has a few tidbits in store for its Create Now Live event on December 11. Could a peek at the Retina version of Photoshop be one of them?

Adobe Systems is hosting a free online event on December 11 where it may reveal the new Retina edition of its flagship Photoshop program.

One of the topics of the Create Now Live event invites participants to "See what's next in Adobe Photoshop." And a YouTube video promoting the Photoshop presentation appears to show someone using the software on a Retina Display MacBook Pro.

That video clip has led Japanese blog site Macotakara and others to speculate that Adobe will show off a new update of Photoshop CS6 designed to support the high-resolution display on the 13-inch and 15-inch Retina MacBook Pros.

This past August, Adobe announced that it would update Photoshop and other Creative Suite 6 products to support Apple's Retina Display. The company promised the update to Photoshop would arrive this fall. And since fall ends December 20, it seems a fair bet that the Retina version of the famed photo editor will be part of the event's agenda.

CNET contacted Adobe for comment and will update the story if we receive any information.

Create Now Live was originally scheduled for December 5 but was pushed back to December 11.

The event will run from 10:00 a.m. to 1:30 p.m. PT, and will be live-streamed on the Adobe Creative Cloud Facebook page.

Anyone who wants to attend the virtual event can sign up at Adobe's registration page. Beyond presentations on Photoshop and Adobe Creative Cloud, the event will kick off with a keynote speech by Jeffrey Veen, vice president of Adobe products, and close out with a Q&A session.

Source: http://cnet.co/TqWXvy

Apple zaps Thunderbolt glitches with firmware update


The small update fixes communications problems with some Thunderbolt devices.

Apple has released a new update for 2012 MacBook Pro systems that fixes problems with the handling of bus-powered Thunderbolt devices.

Thunderbolt is the next-generation I/O technology that Apple is implementing in its Mac systems. It allows very high-bandwidth communication between devices, extends the PCIe (PCI Express) bus outside the computer, and carries the DisplayPort signal for external monitors.

As Thunderbolt is relatively new, some bugs are bound to crop up in various implementations, and with the MacBook Pro systems produced in mid-2012 it's been found that some bus-powered Thunderbolt devices may not work properly.

To tackle this, Apple has issued a small firmware update that should allow bus-powered devices to communicate properly with these systems. The update is a small 442KB download that should be available through the Mac App Store's Software Update service (available by selecting Software Update in the Apple menu). Alternatively, you can download the updater from Apple's Firmware Update v1.1 Web page and install it manually.

The update requires OS X 10.7.4 or later to install, and will require a restart of the computer during which you should be sure it is connected to a reliable power source and not interrupted during the installation process.

While Software Update should determine whether your system qualifies for the update, you can also check by choosing "About This Mac" from the Apple menu and then clicking "More Info..." to see your Mac's model. If you see "Mid 2012" listed under the type of Mac it is, the update applies to your system. Alternatively, you can download the updater and run it: if your system needs it, the installation will continue, and if not, the installer will alert you that your system does not qualify.
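If you would rather check from the command line than through About This Mac, system_profiler reports the model identifier directly. A quick sketch; mid-2012 MacBook Pro models report identifiers such as MacBookPro9,1, MacBookPro9,2, or MacBookPro10,1:

# Print just the model identifier line from the hardware overview
system_profiler SPHardwareDataType | grep "Model Identifier"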

Saturday, November 24, 2012

Google Launches Groups Migration API To Help Businesses Move Their Shared Mailboxes To Google Apps


Many businesses use shared mailboxes, public folders and discussion databases that, over the years, accumulate a lot of institutional knowledge. Most cloud-based email services don’t offer this feature, though, making it hard for some companies to move away from their legacy systems. But Google’s new Google Apps Groups Migration API now allows developers to create tools to move shared emails from any data source to their internal Google Groups discussion archives.

Setting up this migration process is likely a bit too involved for a small business without in-house developers, but it is very flexible and, as Google notes, “provides a simple and easy way to ‘tag’ the migrated emails into manageable groups that can be easily accessed by users with group membership.”

The Migration API is limited to 10 queries per second per account and half a million API requests per day. The maximum size of a single email, including attachments, is 16MB.

This new API is mostly a complement to the existing Google Apps Provisioning API, which helps businesses create, retrieve and update their users’ accounts on the service, as well as the Google Apps Groups Settings API. The Provisioning API also includes a number of methods to work with Google Groups, but doesn’t currently feature any tools for migrating existing emails and accounts to the service.
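As a rough illustration of what such a migration tool does under the hood, here is a hedged sketch of a single archive upload using curl. The group address, OAuth access token, and message file are placeholders, and the endpoint shown reflects the Groups Migration API's documented upload URL, so verify it against Google's current documentation before building on it:

# Upload one email (in RFC 822 format) into a group's discussion archive
curl -X POST \
  -H "Authorization: Bearer $ACCESS_TOKEN" \
  -H "Content-Type: message/rfc822" \
  --data-binary @archived-message.eml \
  "https://www.googleapis.com/upload/groups/v1/groups/shared-mailbox@example.com/archive"

Each call migrates a single message, which is why the per-day request quota matters when planning how long a large mailbox migration will take.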

Source: http://tcrn.ch/TconVC

No-Sweat Windows 8 Networking


You can share pictures, music, videos, documents and printers on your home or small business network by setting up a HomeGroup. You can select the libraries -- e.g., My Pictures or My Documents -- that you want to share, and you can restrict certain folders or files to prevent sharing. You can set up a password to prevent others from changing the share settings you designate.

Easy sharing and streaming of media and other assets among computers at home or in the workplace is a good reason to get networked. Other benefits include sharing Internet connections.

If you've been using email as a method for distributing files among family members or a small business, or are simply taking the Windows 8 plunge, Microsoft has never made networking easier -- particularly in a Windows 7 migration or an all-Windows 8 setup.

If you're already sharing an Internet connection, much of the work is completed for you.

Step 1: Connect Your Windows 8 PCs to the Internet

Perform this connection wirelessly or by using a wired connection, depending on the capabilities of the router. Wired connections are more robust.

Open the Settings charm on the Windows 8 PC by moving your mouse pointer to the lower right corner of the desktop and clicking on the "Settings" cogwheel-like icon. Then click on the wireless network icon -- it looks like a set of mobile phone signal bars -- or the wired network icon that looks like a small monitor screen.

Choose the network available and click "Connect." Enter the appropriate network security key if prompted.

Tip: Inspect your Internet connection device to determine the capabilities. A router will have a number of physical ports that can be used for a wired connection; an antenna is a dead giveaway that there are wireless capabilities. Some wireless routers don't have external antennas. If there's only one port, it's a modem, not a router, and you'll need to buy a router.

Step 2: Turn Sharing On

Turn network sharing on by right-clicking on the wireless or wired network icon from the previous step. Then click "Turn sharing on or off" and choose "Yes, turn on sharing and connect to devices."

Perform this step on each of the computers to be networked.

Tip: When sharing files and other assets, be sure you trust the other network users.

Step 3: Verify the PCs Are Added

Sign on to any of the PCs and point to the bottom right corner of the Start page, as before. Then enter the search term "Network" in the Search text box.

Click on the Network icon that will appear, and you'll see the PCs that have been shared.

Step 4: Join or Create a HomeGroup

Windows 8 automatically creates a HomeGroup, which is a set of PCs that are allowed to share files and printers.

Choose "Change PC Settings" from the cog-like "Settings" charm you used earlier and then click on HomeGroup. Then create or join a HomeGroup. You can obtain any existing passwords from the same settings location on any existing PCs that are already part of the HomeGroup you want to join.

Tip: To obtain an existing HomeGroup password from a Windows 7 machine, go to the Control Panel on that PC and choose "Network and Internet." Then choose HomeGroup and select "View or print the HomeGroup password."

Enter that password on the new Windows 8 machine to join the new machine to the legacy Windows 7 HomeGroup.

Step 5: Choose the Libraries and Devices to Share

In the HomeGroup area of PC Settings, use the toggles to choose which libraries -- Documents, Music, Pictures, and Videos -- and devices, such as printers, you want to share. Then perform this step on all of the computers you want to network.

Tip: Windows RT, the tablet version of Windows 8, won't allow you to create a HomeGroup -- you can only join one.

Step 6: Tweak the HomeGroup Settings

Choose and change the permissions on libraries and devices you want to share or not share by returning to the HomeGroup PC Settings area on the Windows 8 PC you want to tweak -- or in the case of a Windows 7 PC, the HomeGroup options area of the Network and Internet section of the Control Panel.

Tip: Use the search text box function from the earlier step to look for individual files or folders you want to share. Choose the asset, open the file location, and then click the "Share" tab to make permissions changes.

Source: http://bit.ly/UrJYsn

New internet name objections filed by government panel


More than 250 "early" objections to proposed new internet address endings have been filed by a panel representing about 50 of the world's governments.

The list includes references to groups of people or locations including .roma, .islam, .patagonia, .africa and .zulu.

There are also concerns about proposed suffixes covering broad sectors such as .casino, .charity, .health, .insurance and .search.

The list was submitted ahead of a planned rollout next year.

The panel - known as the Government Advisory Committee (Gac) - has published its "early warning" list on the web to give applicants a chance to address its concerns or choose to withdraw their submission and reclaim 80% of their $185,000 (£116,300) application fee.

Gac will then decide in April, at a meeting in Beijing, which of the suffixes warrant formal complaints if it still has outstanding concerns.

Each warning on the list makes reference to the country which filed the objection. A suffix was only added to the register if no other members of Gac objected to its inclusion.

Anti anti-virus
The organisations and suffixes referred to on the list included:


  • Amazon for its applications for .app, .book, .movie, .game and .mail among others.
  • Google for .search, .cloud and .gmbh (a reference to a type of limited German company).
  • Johnson & Johnson for .baby.
  • L'Oreal for .beauty, .hair, .makeup, .salon and .skin.
  • The Weather Channel for .weather.
  • Symantec for .antivirus.
  • eHow publisher, Demand Media, for .army, .airforce, .engineer and .green.

Despite the large number of objections, Icann (Internet Corporation for Assigned Names and Numbers) - the internet name regulator in charge of the rollout - has indicated that it still believed it would be able to release the first suffixes for use by May 2013.

The organisation does not have to comply with the governments' wishes, but must provide "well-reasoned arguments" if it decides to deny any rejection request.

Competition concerns
A range of reasons were given for the early warnings.

France raised concerns about seven organisations that had applied for .hotel or .hotels on the grounds that it believed that if the suffix was introduced it should be reserved for hotel businesses.

"The guarantee of a clear information of the customer on hotel accommodation services is the best way to promote the tourism industry," it said. "Behind the term hotel as a generic denomination, any customer in the world must have the guarantee that will be directly connected to a hotel."

It is feasible that some of the applicants might be able to give this guarantee, allowing the ultimate owner of the suffix to profit by charging individual businesses to register their hotel websites.

But other issues may be harder to resolve.

For example, Australia objected to Amazon's application for the Japanese-language suffix meaning "fashion" on the grounds it might give the firm an unfair advantage.

"Restricting common generic strings for the exclusive use of a single entity could have unintended consequences, including a negative impact on competition," the country's government wrote.

Religions and reputations
An objection by the United Arab Emirates to Asia Green IT System's application for .islam may also be impossible to reconcile.

"It is unacceptable for a private entity to have control over religious terms such as Islam without significant support and affiliation with the community it's targeting," it said.

"The application lacks any sort of protection to ensure that the use of the domain names registered under the applied for new gTLD (generic top-level domain) are in line with Islam principles, pillars, views, beliefs and law."

Other problems stemmed from more commercial concerns.

For example, Samoa is opposed to three applications for the suffix .website on the grounds that it is too similar to the .ws suffix it already controls, which provides the South Pacific country with revenue.

Corporate reputations emerged as another sticking point.

Australia has challenged applications for .gripe, .sucks and .wtf on the basis they had "overtly negative or critical connotations" which might force businesses to feel they had to pay to register their brands alongside the suffix to prevent anyone else from doing so.

The only address that the UK filed an objection to was .rugby.

It objected to two applicants which it said did "not represent the global community of rugby players, supporters and stakeholders".

The UK suggested the proposals be rejected in favour of a third submission for the suffix from the International Rugby Board.

Source: http://bbc.in/SRB1c0

Intel's Two-Pronged Evolution


Intel's new Itanium 9500 and Xeon Phi coprocessors are impressive, evolutionary steps for the company and myriad current customers. However, the new processors also cast considerable light on how Intel will succeed in developing and delivering innovative solutions for core existing and emerging new markets.

It's hard to think of an IT vendor with a stronger leadership position than Intel, but the company is having trouble shaking off the perception that it is on the ropes or headed for disaster in some of its core markets.

On one hand, Intel's mission-critical Itanium platform suffered when Oracle and HP publicly butted heads in an altercation that ended up in court. On the other, the use of graphics processors in high-performance computing and supercomputing applications has caused some to doubt Intel's future in those markets.

Both of these issues are reflected in the company's new Itanium 9500 and Xeon Phi announcements, though the light each casts on Intel is significantly different.

Partner Problems

Regarding Itanium, Intel was stuck between the rock and the hard place of two significant partners -- HP and Oracle -- when the latter claimed the platform was headed toward demise. Both HP -- by far the largest producer of IA-64 systems -- and Intel denied this vociferously. In fact, Oracle's claims contradicted numerous Itanium roadmaps and publicly stated strategies.

However, Oracle refused to back down and said it would not develop future versions of its core database products for the platform. HP fought back, noting an agreement Oracle signed after hiring its former CEO, Mark Hurd, and in August the judge overseeing the case issued a ruling supporting HP. Though Oracle said it will appeal, it also resumed Itanium development and support.

So where do things stand today? Along with providing a significant performance boost over previous-generation processors -- a point that will please loyal IA-64 customers and OEMs -- Intel's new Itanium 9500 is also likely to bolster HP's claims against Oracle and the case for the platform's health and well-being. That isn't just because of the Itanium 9500's capabilities, which are formidable, but also due to Intel's new Modular Development Model, which aims to create common technologies for both the Itanium and Xeon E7 families.

That will certainly add to the mission-critical capabilities of Xeon E7 and it should also take a significant bite out of the cost of developing future Itanium solutions. In the end, not only does Intel's Itanium 9500 deliver the goods for today's enterprises; it also represents a significant advance in Itanium's long term prospects.

The High End

A completely different corner of the IT market -- HPC and supercomputing -- is at the heart of Intel's new Xeon Phi coprocessors. The latest top-ranked system on the Top500.org list, Titan at the DOE's Oak Ridge National Laboratory, is based on AMD Opteron CPUs and NVIDIA GPUs. It is certainly the foremost example of the trend of using GPUs for parallel processing chores, but other systems using similar technologies are also cropping up.

A curious thing about supercomputing is that while these systems deliver eye-popping performance and bragging rights for owners and vendors, their immediate effect on mainstream computing is less significant. Sure, such technologies eventually find their way into commercial systems, but those are typically not high-margin products for OEMs and many customers -- particularly those in the public sector, such as universities -- aren't exactly rolling in dough.

In fact, maximally leveraging existing resources, including the knowledge and training of programmers, technicians and managers, is crucial for keeping these facilities up, running and solvent.

That's a key point related to the Xeon Phi coprocessors, the first commercial iteration of Intel's longstanding MIC development effort. Not only do the new Xeon Phi solutions deliver impressive parallel processing capabilities, they do so in a notably efficient power envelope.

Power Saver

The Intel Xeon/Phi-based Beacon supercomputer at the University of Tennessee is the most energy-efficient system on the latest Top500.org list. More importantly, however, Xeon Phi supports programming models and tools that are common in Intel Xeon-based HPC and supercomputing systems.

That will be welcome news to the owners of high-end Xeon-based systems, which constitute more than 75 percent of the current Top500.org list, but the effect should also ripple downstream into the commercial HPC and workstation markets, benefiting end users, their vendors and developers.

Overall, Intel's new Itanium 9500 and Xeon Phi coprocessors are impressive, evolutionary steps for the company and myriad current customers. However, the new processors also cast considerable light on how Intel will succeed in developing and delivering innovative solutions for core existing and emerging new markets.

Source: http://bit.ly/S21Agw

Mozilla quietly ceases Firefox 64-bit development


Mozilla's engineering manager has requested that developers stop work on Windows 64-bit builds of Firefox.

Mozilla engineering manager Benjamin Smedberg has asked developers to stop nightly builds for Firefox versions optimized to run on 64-bit versions of Windows.

A developer thread titled "Turning off win64 builds," posted by Smedberg on the Google Groups mozilla.dev.planning discussion board, proposed the move.

Claiming that 64-bit Firefox is a "constant source of misunderstanding and frustration," the engineer wrote that the builds often crash, that many plugins are not available in 64-bit versions, and that hangs are more common due to missing code, which causes plugins to function incorrectly. In addition, Smedberg argues that this causes users to feel "second class," and that crash reports from 32-bit and 64-bit versions are difficult for the stability team to tell apart.

Users can still run 32-bit Firefox on 64-bit Windows.

Although originally willing to shelve the idea for a time if it proved controversial, Smedberg later, well, shelved that idea:

Thank you to everyone who participated in this thread. Given the existing information, I have decided to proceed with disabling windows 64-bit nightly and hourly builds. Please let us consider this discussion closed unless there is critical new information which needs to be presented.

The engineer then posted a thread titled "Disable windows 64 builds" on Bugzilla, asking developers to "stop building windows [sic] 64 builds and tests." These include the order to stop building Windows 64-bit nightly builds and repatriate existing Windows 64-bit nightly users onto Windows 32-bit builds using a custom update.

Even though one participant suggested that 50 percent of nightly testers were using the 64-bit builds -- perhaps because an official 64-bit version of Firefox for Windows has never been released -- Smedberg staved off further argument, saying the thread was "not the place to argue about this decision, which has already been made."

Source: http://cnet.co/T6YHtm

Wednesday, November 21, 2012

Apple may name its next version of OS X 'Lynx'


Apple will continue its catty theme of OS X names by referring to OS X 10.9 as Lynx, claims a new report.

Mac OS X 10.9 could be dubbed "Lynx," says blog site AppleScoop.

The rumor sounds plausible. It would continue Apple's trend of naming its OS after ferocious felines. Over the past few years, OS X has leaped from Leopard to Snow Leopard to Lion and then to Mountain Lion.

However, the intel is decidedly second-hand.

The information comes from a "reliable source" who claims to have talked to someone inside Apple. The person reportedly saw some internal papers that indicated Apple was finalizing the name of OS X 10.9. But the source couldn't say when Apple would actually decide on the name or reveal it to the public.

Lynx, however, is on the list of available names for OS X. Way back in 2003, Apple trademarked several related names, including Lynx, Cougar, Leopard, and Tiger, MacRumors reported at the time. The company has already bagged Leopard and Tiger, so both Lynx and Cougar are still available.

A name can sometimes prove tricky, even one that's trademarked. Apple was sued by computer retailer Tiger Direct in 2005 over use of the name Tiger. But the judge eventually found in favor of Apple.

AppleScoop suggests that Apple may unveil its name for OS X 10.9 at next year's Worldwide Developers Conference in June.

Little is known about the next flavor of OS X at this point, though Apple may import a couple of features from iOS. A report out today from 9to5Mac says that OS X 10.9 will include the voice assistant Siri and support for Apple Maps. Based on the increased pace of the last two updates, the new OS could debut next year.

Mac Mini users unable to install OS X 10.8.2


The special 10.8.2 updater for 2012 Mac Mini systems is currently not available from the Mac App Store.

If you own one of Apple's latest 2012 Mac Mini systems and are attempting to update it to the latest version of OS X, you may find the update is not available via the App Store's Software Update service. Furthermore, if you download the manual updater for 10.8.2 and attempt to install it, you will see an error that states: "Error: OS X Update can't be installed on this disk. This volume does not meet the requirements for this update."

The reason for this error is that the 2012 Mac Mini requires a special version of the 10.8.2 update, which Apple pulled from the App Store last Friday and has not yet replaced.

Some have speculated that Apple pulled the update in preparation for the upcoming 10.8.3 release, since Apple has removed prior versions of OS X, such as OS X Lion, from the App Store after Mountain Lion was released. However, this is very unlikely, as the new release is not yet available and is still under testing. Furthermore, Apple still maintains update packages so older versions of OS X can be updated to their latest versions.

Apple has not issued an explanation as to why the update was pulled, but it is more likely because of a glitch in the updater or a problem with the App Store, and the update will likely be reissued once the problem is fixed. However, until this happens, affected users will be at a small disadvantage because they will not be able to install other recent software updates that require OS X 10.8.2, such as those for iPhoto and Aperture.

If you have a 2012 Mac Mini system and the OS X 10.8.2 update is not showing up in your Software Update list, then for now the only option is to wait. Do not try to install the manual delta or combo updaters that are available online, as these will not work and will instead result in an error. Hopefully Apple will address the problem ASAP and allow Mac Mini users to get the latest version of OS X.

Source: http://cnet.co/Q5Da7i

How to restart a FileVault-protected Mac remotely


If necessary, you can restart a FileVault-enabled Mac and have it automatically unlock the volume and load the operating system.

OS X's encryption service, FileVault, originally stored users' home folder contents in encrypted disk images. In OS X Lion, FileVault now uses Apple's new CoreStorage volume manager to encrypt the entire disk. With CoreStorage, the OS configures a small hidden partition with a preboot welcome screen that looks like the standard OS X log-in window and contains user accounts that are authorized to unlock the volume and cause the system to load and automatically log in to the account specified on the preboot screen.

Unfortunately, while more secure and while offering a relatively seamless experience when sitting at your computer, the preboot authentication requirement for FileVault does pose a bit of a problem for those who access their systems remotely, such as through Screen Sharing (using Back To My Mac) or through SSH and other remote-access technologies.

If you make a configuration change and need to restart the system, the computer will require preboot authentication before the system and any remote-access services load. In effect, this creates a bit of a hurdle for those who wish to keep their systems secure with FileVault but who also want to be able to restart their systems remotely.

Luckily, Apple does provide a way to restart a FileVault-encrypted system and have it boot back to a working state. To do this, open the Terminal and run the following command:

sudo fdesetup authrestart

This command asks for the current user's password or the recovery key for the FileVault volume, then stores those credentials so the computer can unlock the volume at the preboot screen when it restarts. When the system reboots, the volume unlocks automatically and the OS loads, dropping you at the standard log-in window so you can log in to the user account of your choice.

This approach to restarting a system is useful if you have made manual changes to a FileVault-protected system, but also when the system has software updates waiting to be installed. While the App Store or Software Update service will prompt you to restart the system, skipping those prompts and using the above command will apply the updates and restart the system to a state that remains usable for remote access.
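
For example, when maintaining a Mac over SSH, you might pair the built-in softwareupdate tool with the authenticated restart; a rough sketch, assuming an administrator account and pending updates:

# Install all available software updates from the command line
sudo softwareupdate --install --all

# Restart with stored FileVault credentials so the encrypted volume
# unlocks automatically and remote access comes back up after the reboot
sudo fdesetup authrestart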

In addition to aiding in remote management of a system, this command can be used locally to restart a system without needing to manage the preboot authentication screen again. If you are configuring updates on a local server and simply need to restart it to a working state, then you can issue this command and move on to other tasks instead of having to wait for it to restart and then manually unlock the encrypted boot drive.

This command does require administrative access to run, and you need to know either the password of a FileVault-enabled user account (likely the same admin account) or the recovery key for the FileVault volume that is displayed for you when you enable FileVault. These credentials are stored in memory for the restart process, but are then cleared when the system boots. As a result, while some may have concerns about such commands providing a means around the system's standard security measures, the command should maintain the same security requirements for FileVault.



Monday, November 19, 2012

New malware variant recognizes Windows 8, uses Google Docs as a proxy to phone home


Windows 8 may block most malware out of the box, but there is still malware out there that gets past Microsoft's latest and greatest. A new Trojan variant, detected as Backdoor.Makadocs and spread via RTF and Microsoft Word documents detected as Trojan.Dropper, has been discovered that not only adds code to target Windows 8 and Windows Server 2012, but also uses Google Docs as a proxy server to phone home to its command-and-control (C&C) server.

Symantec believes the threat has been updated by the malware author to include the Windows 8 and Windows Server 2012 references, but doesn’t do anything specific for them (yet). This is no surprise: the two operating systems were released less than a month ago but of course they are already popular, and cybercriminals are acting fast.

Yet the more interesting part is the Google Docs addition. Backdoor.Makadocs gathers information from the compromised computer (such as host name and OS type) and then receives and executes commands from a C&C server to do further damage.

To do so, the malware authors decided to leverage Google Docs to keep communications with the C&C server flowing. As Google Docs becomes more and more popular, and as businesses continue to accept it and allow the service through their firewalls, the method is a clever move.

The reason this works is that Google Docs includes a "viewer" function that retrieves the resources at another URL and displays them, allowing the user to view a variety of file types in the browser. In violation of Google's policies, Backdoor.Makadocs uses this function to access its C&C server, likely in the hope of hiding the link to the C&C, since Google Docs encrypts its connections over HTTPS.

Symantec says “It is possible for Google to prevent this connection by using a firewall.” Since the document does not leverage vulnerabilities to function (it relies on social engineering tactics instead) it’s unlikely Google will be able to do much beyond participating in a game of cat and mouse with the malware authors.

Nevertheless, we have contacted Google and Microsoft about this issue. We will update this article if and when we hear back.

Update at 4:30PM EST: “Using any Google product to conduct this kind of activity is a violation of our product policies,” a Google spokesperson said in a statement. “We investigate and take action when we become aware of abuse.”

Source: http://tnw.co/S5ATHu

Microsoft Offers Guide to Adapting Your Site for IE 10



Microsoft’s Windows Phone 8 offers much better HTML5 support than its predecessors thanks to the Internet Explorer 10 web browser.
Unfortunately, if you’ve been building WebKit-centric sites IE 10 users won’t be able to properly view your site, which is why Microsoft has published a guide to adapting your WebKit-optimized site for Internet Explorer 10.
If you’ve been following CSS best practices, using prefixes for all major browsers, along with the unprefixed properties in your code, then there’s not much to be learned from Microsoft’s guide (though there are a couple of differences in touch APIs that are worth looking over).
But if you’ve been targeting WebKit alone, Microsoft’s guide will get your sites working in IE 10, WebKit, and other browsers that have dropped prefixes for standardized CSS properties.
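
As a concrete illustration of the prefixing approach, here is a small sketch that writes out a CSS rule declaring the same gradient with the -webkit-, -moz-, -ms-, and -o- prefixes plus the standard unprefixed form (the file name, selector, and colors are arbitrary examples):

# Write a CSS rule covering every major vendor prefix plus the standard syntax
cat > gradient-example.css <<'EOF'
.banner {
  background: -webkit-linear-gradient(top, #336699, #000000); /* WebKit: Chrome, Safari */
  background: -moz-linear-gradient(top, #336699, #000000);    /* Firefox */
  background: -ms-linear-gradient(top, #336699, #000000);     /* IE 10 previews */
  background: -o-linear-gradient(top, #336699, #000000);      /* Opera */
  background: linear-gradient(to bottom, #336699, #000000);   /* standardized, unprefixed */
}
EOF
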
Sadly, even some of the largest sites on the web are coding exclusively for WebKit browsers like Chrome, Safari and Mobile Safari. The problem is bad enough that Microsoft, Mozilla and Opera are planning to add support for some -webkit prefixed CSS properties.
In other words, because web developers are using only the -webkit prefix, other browsers must either add support for -webkit or risk being seen as less capable browsers even when they aren’t. So far Microsoft hasn’t carried through and actually added support for -webkit to any versions of IE 10, but Opera has added it to its desktop and mobile browsers.
Microsoft’s guide to making sites work in IE 10 for Windows Phone 8 also covers device detection (though it cautions that feature detection is the better way to go) and how to make sure you trigger standards mode in your testing environment, since IE 10 defaults to backward-compatibility mode when used on local intranets.
For more details on how to make sure your site works well in IE 10 for Windows Phone 8, head on over to the Windows Phone Developer Blog (and be sure to read through the comments for a couple more tips).


Wednesday, November 14, 2012

Storage buying guide


Looking for new storage devices and need some tips? You're in the right place.

Information runs the computing world, and handling it is crucial. So it's important that you select the best storage device to not only keep your data, but also distribute it. In this guide, I'll explain the basics of storage and list the features that you should consider when shopping. But if you're ready to leave for the store right now, here are my top picks.

Hard-core users hoping to get the most out of a home storage solution should consider a network-attached storage (NAS) server from Synology, such as the DS1511+, DS412+, or the DS213air. All three offer superfast speeds, a wealth of features, and state-of-the-art user interfaces, and they are more than worth the investment.

Alternatively, if you want to make your computer faster, a solid-state drive (SSD) such as the Samsung 830, the Intel 520, or the OCZ Vertex 4 will significantly boost speeds of your current hard-drive-based system.

Do you have information that you just can't afford to lose? Then I'd recommend the heavyweight, disaster-proof IoSafe Solo G3. You can't damage this drive even if you try. However, if you just want to casually extend your laptop's storage space, a nice and affordable portable drive, such as the Seagate Backup Plus, the WD My Passport, or the Buffalo MiniStation Thunderbolt will do the trick.

Now if you want to know more about storage, I invite you to read on. On the whole, there are three main points you should consider when making your list: performance, capacity, and data safety. I'll explain them briefly here. And after you're finished, check out this related post for an even deeper dive into the world of storage.

Performance
Storage performance refers to the speed at which data transfers within a device or from one device to another. Currently, the speed of a single consumer-grade internal drive is defined by the Serial ATA (SATA) interface standard, which determines how fast internal drives connect to a host (such as a personal computer or a server), or to one another. There are three generations of SATA, with the latest, SATA 3, capping at 6Gbps (roughly 600MBps of real-world throughput once encoding overhead is taken into account). The earlier SATA 1 and SATA 2 versions cap data speeds at 1.5Gbps and 3Gbps, respectively.

So what do those data speeds mean in the real world? Well, consider that at top speed, a SATA 3 drive can transfer a CD's worth of data (about 700MB) in roughly a second. The actual speed may be slower due to mechanical limitations and overhead, but you get the idea of what's possible. Solid-state drives (SSDs), on the other hand, offer speeds much closer to the ceiling of the SATA standard. Most existing internal drives and host devices (such as computers) now support SATA 3, and are backward-compatible with previous revisions of SATA.
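
As a back-of-the-envelope check, SATA uses 8b/10b encoding, so roughly 10 bits travel over the wire for every byte of data; a quick sketch using shell arithmetic and bc:

# Effective ceiling of each SATA generation after 8b/10b encoding (10 bits per byte)
echo "SATA 1: $((1500 / 10)) MB/s"   # 1.5Gbps line rate -> ~150 MB/s
echo "SATA 2: $((3000 / 10)) MB/s"   # 3Gbps line rate   -> ~300 MB/s
echo "SATA 3: $((6000 / 10)) MB/s"   # 6Gbps line rate   -> ~600 MB/s

# Time to move a 700MB CD image at the SATA 3 ceiling
echo "scale=2; 700 / 600" | bc       # ~1.17 seconds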

Since internal drives are used in most other types of storage devices, including external drives and network storage, the SATA standard is the common denominator of storage performance. In other words, a single-volume storage solution -- one that has only one internal drive on the inside -- can be as fast as 6Gbps. In multiple-volume solutions, there are techniques that aggregate the speed of each individual drive into a faster combined data speed, but I'll discuss that in more detail below.

Capacity
Capacity is the amount of data that a storage device can handle. Generally, we measure the total capacity of a drive or a storage solution in gigabytes (GB). On average, one GB can hold about 500 iPhone 4 photos, or about 200 iTunes digital songs.

Currently, the highest-capacity 3.5-inch (desktop) internal hard drive can hold up to 4 terabytes (TB), or 4,000GB. On laptops, the top hard drive offers up to 2TB of space, and a solid-state drive (SSD) can store up to 512GB before getting too expensive to be practical.

While a single-volume storage solution's capacity will max out at some point, there are techniques to combine several drives together to offer dozens of TB, and even more. I'll discuss that in more detail below.

Data safety
Data safety depends on the durability of the drive. And for single drives, you also have to consider both the drive's quality and how you'll use it.

Generally, hard drives are more susceptible to shocks, vibration, heat, and moisture than SSDs. For your desktop, durability isn't a big issue since you won't move your computer very often. For a laptop, however, I'd recommend an SSD or a hard drive that's designed to withstand shocks and sudden movements.

For portable drives, you can opt for a product that comes with layers of physical protection, such as the LaCie Rugged Thunderbolt, the IoSafe Rugged Portable, or the Silicon Power Armor A80. These drives are generally great for those working in rough environments.

But even when you've chosen the optimal drive for your needs, don't forget to use backup, redundancy, or both. Even the best drive is not designed to last forever, and there's no guarantee against failure, loss, or theft.

The easiest way to back up your drive is to regularly put copies of your data on multiple storage devices. Many external drives come with automatic backup software for Windows. Mac users, on the other hand, can take advantage of Apple's Time Machine feature. Note that all external drives work with both Windows and Macs, as long as they are formatted in the right file system: NTFS for Windows or HFS+ for Macs. Reformatting takes just a few seconds. For those who are on a budget, here's the list of top five budget portable drives.
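
On a Mac, for instance, you could prepare a new external drive and point Time Machine at it entirely from the Terminal; a rough sketch, assuming the drive shows up as disk2 and you are willing to erase it (the disk identifier and volume name are placeholders -- check diskutil list first):

# Find the external drive's identifier
diskutil list

# Erase it as a journaled HFS+ volume named "Backup" (this destroys its contents)
sudo diskutil eraseDisk JHFS+ Backup disk2

# Set the new volume as the Time Machine destination and kick off a backup
sudo tmutil setdestination /Volumes/Backup
tmutil startbackup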

Yet, this process isn't foolproof. Besides taking time, backing up your drive can leave small windows in which data may be lost. That's why for professional and real-time data protection, you should consider redundancy.

RAID
The most common solution for data redundancy is RAID, which stands for redundant array of independent disks. RAID requires two or more internal drives but, depending on the setup, a RAID configuration can offer faster speeds, more storage space, or both. Just note that you'll need to use drives of the same capacity. Here are the three most common RAID setups.

RAID 1
Also called mirroring, RAID 1 requires at least two internal drives. In this setup, data writes identically to both drives simultaneously, resulting in a mirrored set. What's more, a RAID 1 setup continues to operate safely even if only one drive is functioning (thus allowing you to replace a failed drive on the fly). The drawback of RAID 1 is that no matter how many drives you use, you get the capacity of only one. RAID 1 also suffers from slower write speeds.

RAID 0
Also called striping, RAID 0 likewise requires at least two internal drives. Unlike RAID 1, however, RAID 0 combines the capacity of each drive into a single volume while delivering maximum bandwidth. The catch is that if one drive dies, you lose the data on all of them. So while more drives in a RAID 0 setup means higher bandwidth and capacity, there's also a greater risk of data loss. Generally, RAID 0 is used mostly for dual-drive storage solutions. And should you choose RAID 0, backup is a must.
For a storage device that uses four internal drives, you can use a RAID 10 setup, which is the combination of RAID 1 and RAID 0, for both performance and data safety.

RAID 5
This setup requires at least three internal drives and distributes data, along with parity information, across all of them. A single drive failure won't result in the loss of any data, though performance will suffer until you replace the broken device. Still, because it balances storage space (you lose the capacity of only one drive in the array), performance, and data safety, RAID 5 is the preferred setup.

Most RAID-capable storage devices come with the RAID setup preconfigured, so you don't need to configure it yourself.
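
If you do want to roll your own array on a Mac with a couple of spare drives, OS X's built-in software RAID can handle it from the Terminal; a minimal sketch for a RAID 1 mirror (the disk identifiers are placeholders, and both drives will be erased):

# Identify the two drives you plan to mirror
diskutil list

# Create a mirrored (RAID 1) set named "SafeStore" formatted as journaled HFS+
diskutil appleRAID create mirror SafeStore JHFS+ disk2 disk3

# Check the status and health of the new set
diskutil appleRAID list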

Now that you've learned how to balance performance, capacity, and data safety, let's consider the three main types of storage devices: internal drives, external drives, and network-attached storage (NAS) servers.

Internal drives
Though they share the same SATA interface standard, internal drives can vary sharply in performance. Generally, hard drives are much slower than SSDs, but SSDs are much more expensive than hard drives, gigabyte for gigabyte.

That said, if you're looking to upgrade your system's main drive (the one that hosts the operating system), it's best to get an SSD. You can get an SSD with a capacity of 256GB or less (currently costing some $200 or less), which is enough for a host drive. You can always add more storage via an external drive or, in the case of a desktop, a regular secondary hard drive.

Though not all SSDs offer the same performance, the differences are minimal. To make it easier for you to choose, here's the list of current best five internal drives.

External drives
External storage devices are basically one or more internal drives put together inside an enclosure and connected to a computer using a peripheral connection.

There are four main peripheral connection types: USB, Thunderbolt, FireWire, and eSATA. Most, if not all, new external drives now use just USB 3.0 or Thunderbolt or both. There are good reasons why.

USB 3.0 caps at 5Gbps and is backward-compatible with USB 2.0. Thunderbolt caps at 10Gbps, and you can daisy-chain up to six Thunderbolt drives together without degrading the bandwidth. Thunderbolt also allows for RAID when you connect multiple single-volume drives of the same capacity. Note that more computers support USB 3.0 than Thunderbolt, especially among Windows machines. All existing computers support USB 2.0, which also works with USB 3.0 drives (though at USB 2.0 speeds).

Generally, speed is not the most important factor for non-Thunderbolt external drives. That may seem counterintuitive, but the reason is that the USB 3.0 connectivity standard, which is the fastest among all non-Thunderbolt solutions, is slower than the speed of SATA 3 internal drives.

Capacity, however, is a bigger issue. USB external drives are the most affordable external storage devices on the market, and they come in a wide range of capacities to fit your budget. Make sure you get a drive that offers at least as much capacity as your computer's internal drive. Check out the list of best five external drives for more information.

Currently, Thunderbolt storage devices are aimed mostly at Macs and, unlike other external drives, deliver very fast performance. Yet they are significantly more expensive than USB 3.0 drives, with prices fluctuating a great deal depending on the number of internal drives used. Here are the top five Thunderbolt drives on the market.

Network-attached storage (NAS) devices
A NAS device (aka NAS server) is very similar to an external drive. Yet, instead of connecting to a computer directly, it connects to a network via a network cable (or Wi-Fi) and offers storage space to the entire network at the same time.

As you might imagine, NAS servers are ideal for sharing a large amount of data between computers. Beyond storage, NAS servers offer many more features, including (but not limited to) streaming digital content to network players, downloading files on their own, backing up files from networked computers, and sharing data over the Internet.

If you're in the market for a NAS server, note that its data rate is capped by the Gigabit network connection, which tops out at about 125MBps in theory and less in practice -- far below the speed of the internal drives themselves. Instead of chasing drive speed, focus on the capacities of the internal drives used. It's also a good idea to get hard drives that use less energy and are designed to run 24-7, since NAS servers are generally left on all the time. See the list of best five NAS servers for my top picks.
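
The math behind that ceiling is straightforward; a quick sketch in the shell:

# Gigabit Ethernet: 1,000 megabits per second / 8 bits per byte = 125 MB/s in theory
echo "$((1000 / 8)) MB/s theoretical ceiling"

# Protocol overhead typically leaves roughly 100-110 MB/s of real-world throughput,
# which even a single modern hard drive can come close to saturating.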

Source: http://cnet.co/QH9Zc1

WD ships 802.11ac My Net router and media bridge


WD announces the availability of its 802.11ac router and media bridge.

WD today announced the availability of its 802.11ac Wi-Fi products, including a new router and a media bridge.

WD, one of the largest hard-drive makers in the world, jumped into home networking just recently with the My Net router family, which includes the already reviewed My Net N900 HD and My Net N900 Central. The two new devices announced today complete the company's Wi-Fi portfolio by adding support for the latest 802.11ac standard.

The two new products are an 802.11ac router, the My Net AC1300 HD Dual-Band Router, and an 802.11ac-compatible media bridge, the My Net AC Bridge. The media bridge lets users experience 802.11ac right away, since there are currently no hardware clients that support the new Wi-Fi standard. It can add up to four Ethernet-ready devices, such as game consoles, computers, and printers, to the Wi-Fi network created by the My Net AC1300 router or any other compatible router.

Similar to other 802.11ac routers on the market, such as the Asus RT-AC66U or the Netgear R6300, the new My Net AC1300 is a true dual-band router and supports all existing Wi-Fi devices. When coupled with 802.11ac clients, it can offer wireless speeds of up to 1,300Mbps (or 1.3Gbps). When working with the popular wireless-N clients, it offers Wi-Fi speeds of up to 450Mbps on each of its 5GHz and 2.4GHz frequency bands simultaneously.

WD says the new router also supports a customized QoS feature called FastTrack, available in select previous My Net routers, which automatically prioritizes traffic for entertainment services such as Netflix, YouTube, and online gaming. It's also a Gigabit router, providing fast networking for wired clients, and offers many features for home users.

The My Net AC1300 Router and My Net AC Bridge are available now and are slated to cost $190 and $150, respectively.

Source: http://cnet.co/UpVk08

Adobe Hacker Says He Used SQL Injection To Grab Database Of 150,000 User Accounts


Exposed passwords were MD5-hashed and 'easy to crack' via free cracking tools, he says.

Adobe today confirmed that one of its databases has been breached by a hacker and that it had temporarily taken offline the affected Connectusers.com website.
The attacker who claimed responsibility for the attack, meanwhile, told Dark Reading that he used a SQL injection exploit in the breach.

Adobe's confirmation of the breach came in response to a Pastebin post yesterday by the self-proclaimed Egyptian hacker who goes by "ViruS_HimA." He says he hacked into an Adobe server and dumped a database of 150,000 emails and passwords of Adobe customers and partners; affected accounts include Adobe employees, U.S. military users including U.S. Air Force users, and users from Google, NASA, universities, and other companies.

The hacker, who also goes by Adam Hima, told Dark Reading that the server he attacked was the Connectusers.com Web server, and that he exploited a SQL injection flaw to execute the attack. "It was an SQL Injection vulnerability -- somehow I was able to dump the database in less requests than normal people do," he says.

Users' passwords for the Adobe Connect users site were stored hashed with MD5, he says, which made them "easy to crack" with freely available tools. And Adobe wasn't using Web application firewalls (WAFs) on the servers, he notes.

"I just want to be clear that I'm not going against Adobe or any other company. I just want to see the biggest vendors safer than this," he told Dark Reading. "Every day we see attacks targeting big companies using Exploits in Adobe, Microsoft, etc. So why don't such companies take the right security procedures to protect them customers and even themselves?"

The hacker leaked only some of the affected emails, including some from "adobe.com", "*.mil", and "*.gov" addresses, with a screenshot in his Pastebin post, where he first noted that his leak came because Adobe was slow to respond to vulnerability disclosures and fixes.

"Adobe is a very big company but they don't really take care of them security issues, When someone report vulnerability to them, It take 5-7 days for the notification that they've received your report!!" he wrote. "It even takes 3-4 months to patch the vulnerabilities!"

Adobe didn't provide details of how the breach occurred. Guillaume Privat, director of Adobe Connect, in a blog post this afternoon said Adobe took the Connectusers.com forum website offline last night and is working on getting passwords reset for the affected accounts, including contacting the users. Connect is Adobe's Web conferencing, presentation, online training, and desktop-sharing service. Only the user forum was affected.

"Adobe is currently investigating reports of a compromise of a Connectusers.com forum database. These reports first started circulating late during the day on Tuesday, November 13, 2012. At this point of our investigation, it appears that the Connectusers.com forum site was compromised by an unauthorized third party. It does not appear that any other Adobe services, including the Adobe Connect conferencing service itself, were impacted," Privat said.

This is the second public breach of the software firm this year. In October, Adobe revealed that an internal server with access to its digital certificate code-signing infrastructure was hacked by "sophisticated threat actors."

The attackers had created at least two malicious files that they digitally signed with a valid Adobe digital certificate. Adobe revoked the certificate and issued updates for its software signed by it, including Windows-based apps and Adobe AIR.

Tal Beery, a security researcher at Imperva, analyzed the data dump in the Connectusers Pastebin post. He found that the list appears to be valid and that the hacked database is relatively old. "I have analyzed some of the leaked data and compared some names in that leaked files against linkedin.com and found out they did work for Adobe but no longer employed there," he says. "The list include both Adobe and other companies email, which suggests that this may be a customer related" database, he says.

The Adobe hacker Hima, meanwhile, warned in his post that his next leak would be for Yahoo.

Source: http://bit.ly/ZvHVtq

New WiFi protocol boosts congested wireless network throughput by 700%


Engineers at NC State University (NCSU) have discovered a way of boosting the throughput of busy WiFi networks by up to 700%. Perhaps most importantly, the breakthrough is purely software-based, meaning it could be rolled out to existing WiFi networks relatively easily — instantly improving the throughput and latency of the network.

As wireless networking becomes ever more prevalent, you may have noticed that your home network is much faster than the WiFi network at the airport or a busy conference center. The primary reason for this is that a WiFi access point, along with every device connected to it, operates on the same wireless channel. A channel is basically a single-lane road, a lot like an electrical (copper wire) bus. Each channel, depending on the wireless technology being used, has a maximum bandwidth (say, 100 megabits per second), with that bandwidth being distributed between all connected devices.

At home, you might have exclusive use of that road, meaning you can drive as fast as you like and suck up every last megabit — but at a busy conference center, you are fighting tens or hundreds of people for space. In such a situation, your bandwidth allocation rapidly dwindles and your latency quickly climbs. This single-channel problem is also compounded by the fact that the road isn’t just one-way; the access point also needs to send data back to every connected device.
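
To put rough numbers on that road-sharing picture, divide the channel's capacity by the number of active clients; a toy sketch (the figures are illustrative, not measurements):

# A shared 100Mbps channel split evenly across active clients (illustrative only)
for clients in 1 10 50 100; do
  echo "$clients client(s): ~$((100 / clients)) Mbps each"
done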

In short, WiFi networks have good throughput and low latency up until a point — and then they very quickly degrade into a horrible mess where no one can use the network properly. If the channel becomes congested enough that the access point can no longer send out data, then the show’s over, basically.

To solve this problem, NC State University has devised a scheme called WiFox. In essence, WiFox is some software that runs on a WiFi access point (i.e. it’s part of the firmware) and keeps track of the congestion level. If WiFox detects a backlog of data due to congestion, it kicks in and enables high-priority mode. In this mode, the access point gains complete control of the wireless network channel, allowing it to clear its backlog of data. Then, with the backlog clear, the network returns to normal.

We don’t have the exact details of the WiFox scheme/protocol (it’s being presented at the ACM CoNEXT conference in December), but presumably it switches between normal and high priority states very rapidly. If we use the single-lane road analogy, WiFox is basically playing the role of a traffic policeman — allowing data to flow freely in one direction for a while, and then reversing the flow. Presumably the trick is designing an algorithm that is capable of detecting congestion very rapidly, and designing a traffic cop that switches priority for exactly the right amount of time to clear the backlog.

All told, the NCSU researchers report that their testbed — a single WiFi access point with 45 connected devices — experienced a 700% increase in throughput. Exact figures aren’t given, but if we’re talking about a modern 802.11n network, we’re probably looking at a jump from around 1Mbps to around 7Mbps. Furthermore, latency is also decreased by 30-40%. There is no word when WiFox might be deployed commercially, but considering it could be rolled out as a firmware update it will hopefully be rather soon.

Source: http://bit.ly/SVUVTR

Tuesday, November 13, 2012

E-mailed malware disguised as group coupon offers on the rise


Spammers take advantage of the rising popularity of e-mailed advertisements by mimicking them and attaching viruses.

Be sure to double check that Groupon you received in your e-mail -- spammers are using the popularity of e-mailed advertisements for group discount deals to send more malware.

Malware spread through fake e-mail advertisements and notifications is on the rise, according to a study released today by security firm Kaspersky Lab.

"They are primarily doing so by sending out malicious e-mails designed to look like official notifications. Kaspersky Lab is seeing more and more malicious spam designed to look like coupon service notifications," the report said.
The firm said it also noted this coupon spam in its spring report but has found that the trend is increasing. Instead of attaching viruses as files to these types of e-mails, spammers are now adding malicious links. Ads mimicking Groupon seem to be the most prevalent, the firm said.

"Kaspersky Lab experts expected to see the appearance of this type of spam since coupons are very popular among Internet users and they trust coupon services," the study said. "An e-mail from a coupon service is an ideal disguise for malicious users."

In July, the firm found an e-mail that looked like a notification for a new promotion for Groupon, complete with links to the Groupon Web site. It included an attached ZIP file named Gift coupon.exe. This executable file, a file that can run an application on your computer, contained a Trojan malware program.

Kaspersky Lab is now starting to see that many of these fake advertisements no longer have attachments -- they have malicious links instead.

To avoid being duped, users should remember that coupon services never include attachments in their e-mails, and users should double check if a seemingly legitimate e-mail is actually from the service it is claiming to be. You can check this by looking at the sender name, or hovering your mouse over the links to get a preview of what URLs they're linking to.

Other types of popular spammy e-mail disguised as notifications included fake letters from hosting services, banking systems, social networks, online stores, and hotel confirmations. General spam currently makes up 71.5 percent of e-mails, with e-mails containing malicious attachments like the fake Groupon ones accounting for 3.9 percent of e-mails, according to Kaspersky Lab.

Thanks to the election, spammers also favored using President Barack Obama's name in e-mails, along with the name of his wife, Michelle Obama. The First Lady's name was used to add a presidential twist to the classic Nigerian scam: the e-mailer claimed to be Michelle Obama, sitting on a pile of cash at the White House, and promised the recipient millions of dollars in return for a reply with their address, telephone number, and $240.

To see more examples of scams, click here. It's a real link to the report, but you might want to hover your mouse over it just in case.

Source: http://cnet.co/W6vhMR

WD® Offers Mac Users USB 3.0 Connectivity With New My Book® Studio™ External Hard Drive


4 TB Capacity; Premium Aluminum Enclosure and Hardware-Based Encryption That Protects Against Unauthorized Access to Valuable Content

IRVINE, Calif., Nov. 13, 2012 /PRNewswire/ --  WD®, a Western Digital company (NASDAQ: WDC), and a world leader in external storage and connected life solutions, today announced a new version of the My Book® Studio™ external hard drive. USB 3.0 capability is now extended to this family of My Book Studio hard drives and provides data transfer speeds up to three times faster than USB 2.0. WD has also introduced the addition of a massive 4 TB capacity in a single-drive configuration making it a perfect solution for backing up large amounts of digital content. The My Book Studio drive is designed with a premium aluminum enclosure and will be available in 1 TB, 2 TB, 3 TB and 4 TB capacities.

The My Book Studio drive's features and benefits have made it a favorite among creative professionals and Mac computer enthusiasts, including working seamlessly with Apple® Time Machine, for protecting and backing up their valuable professional and personal content. The My Book Studio external hard drive includes WD Security™, which allows users to password protect their drive along with 256-bit hardware-based encryption for added security against unauthorized access to the drive and its contents.

"WD's My Book Studio with USB 3.0 delivers extreme transfer speeds while maintaining the standard USB and FireWire connections computer users prefer," said Jim Welsh, executive vice president and general manager of WD's branded and CE products. "Its large capacity, combined with a premium aluminum enclosure, hardware-based encryption, and compatibility with Apple Time Machine, provide Mac users with a fast, secure and complete system for preserving their valuable content."

Pricing and Availability
The My Book Studio external hard drive comes with a 3-year limited warranty and is available on the WD store at www.wdstore.com and at select retailers and distributors. MSRP for My Book Studio 1 TB is $159.99 USD; the 2 TB is $189.99 USD; the 3 TB is $239.99 USD and the 4 TB is $299.99. WD will continue to offer its My Book Studio drive with USB 2.0 and FireWire® 800 connectivity for legacy systems.

About WD
WD, a Western Digital company, is a long-time innovator and storage industry leader. As a storage technology pacesetter, the company produces reliable, high-performance hard disk drives and solid state drives. These drives are deployed by OEMs and integrators in desktop and mobile computers, enterprise computing systems, embedded systems and consumer electronics applications, as well as by the company in providing its own storage products. WD's leading storage devices and systems, networking products, media players and software solutions empower people around the world to easily save, store, protect, share and experience their content on multiple devices. WD was established in 1970 and is headquartered in Irvine, California. For more information, please visit the company's website at www.wd.com.

Western Digital Corp. (NASDAQ: WDC), Irvine, Calif., is a global provider of products and services that empower people to create, manage, experience and preserve digital content. Its companies design and manufacture storage devices, networking equipment and home entertainment products under the WD, HGST and G-Technology brands. Visit the Investor section of the company's website (www.westerndigital.com) to access a variety of financial and investor information.

Western Digital, WD, the WD logo, and My Book are registered trademarks in the U.S. and other countries; My Book Studio is a trademark of Western Digital Technologies, Inc. Other marks may be mentioned herein that belong to other companies. All other brand and product names mentioned herein are the property of their respective companies. As used for storage capacity, one terabyte (TB) = one trillion bytes. Total accessible capacity varies depending on operating environment.