
Tuesday, December 18, 2012

Apple targets Wi-Fi trouble with EFI firmware updates for 2012 Macs


The latest round of Apple EFI firmware updates tackles Thunderbolt, sleep, and Wi-Fi issues.

Apple has released three EFI firmware updates for some of its Mac systems that were released in 2012, which tackle a number of issues pertaining to sleep, Thunderbolt performance, and -- more relevantly to many users -- reliability of Wi-Fi connectivity.

The first update is a Wi-Fi update for all late 2012 Mac systems that improves compatibility with 5GHz-band Wi-Fi signals.

The update includes a new version of the AirPortBrcm4311.kext kernel extension. This update is specific to systems running OS X 10.8.2 build 12C2034 (you can look this up by clicking the version number in the About This Mac window, available from the Apple menu).
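
If you prefer the Terminal, here is a quick sketch (my own illustration, not part of Apple's instructions) that uses Python to call Apple's standard sw_vers tool and check the build number:

# Quick sketch: print the OS X build number from the Terminal.
# sw_vers is Apple's standard version-reporting tool; the
# -buildVersion flag prints just the build string (e.g. 12C2034).
import subprocess

build = subprocess.check_output(["sw_vers", "-buildVersion"]).decode().strip()
print("OS X build:", build)
if build == "12C2034":
    print("This is the build the Wi-Fi software update targets.")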

The second update is for 13-inch MacBook Pro models; it's supposed to improve sleep performance and Thunderbolt router support, and it addresses a problem with HDMI display output. This update also includes the 5GHz Wi-Fi band compatibility fix. The last update contains similar fixes, but for iMac systems.

The updates should be available through Apple's Software Update service, and can also be downloaded directly from Apple's support downloads site.


As with any firmware update, be sure your system is attached to a reliable power supply and follow the onscreen instructions for installing. Do not interrupt the installation, even if it takes a few minutes to restart and install, and be sure to fully back up your system before applying this or any other update.

Tuesday, December 11, 2012

Top 5 internal drives of 2012: Your system deserves a worthy upgrade


No matter how big or small your computer is, it has at least one internal drive to host its operating system and programs. That drive comes in one of two forms: a regular hard drive (HDD) or a solid-state drive (SSD). The former is affordable and offers lots of storage space, while the latter is generally more expensive but superfast. The good news is that 2012 was the year SSDs became truly popular, thanks to falling prices and a growing number of vendors entering this part of the storage market.

With that in mind, if your computer uses a hard drive as its main internal storage, you definitely want to replace it with an SSD. No single upgrade will breathe more new life into your computer.

If you have a desktop, however, it's better to use an SSD as the main drive that hosts the operating system and a fast hard drive as a secondary drive to host data. This way you have the best of both performance and storage space.

If you want to find out more about digital storage, don't forget to check out my series on the basics. For those who want a quick pick for the holidays, here are the five best internal drives I reviewed during 2012. Any of these drives will serve your system well, so the list is sorted by review order rather than by rank.

OCZ Vector
The OCZ Vector is the latest drive from OCZ and is the first drive made entirely by OCZ itself, from the controller to the flash memory. The result is something quite impressive. In my testing, it's arguably the fastest consumer-grade SSD to date. Coming in the ultrathin (7mm) 2.5-inch design and shipped with a 3.5-inch drive bay converter, the Vector works with all standard systems, from desktops to ultrabooks.

It's best used in a desktop, however, since it's not the most energy-efficient drive around. For that you want to check out the Samsung 840 Pro below.

Samsung 840 Pro
The Samsung 840 Pro is an upgrade of the already excellent Samsung 830. The new drive shares exactly the same design as its predecessor, coming in the 7mm-thin 2.5-inch form factor. On the inside, however, it uses a new controller and toggle-mode NAND flash memory to offer a much better combination of performance and energy efficiency. In fact, it's currently the most energy-efficient SSD on the market, with a consumption rating of just 0.068W (working) and 0.042W (idle). For this reason, the new Samsung is best suited for laptops and ultrabooks.

Corsair Neutron GTX
The Corsair Neutron GTX is the first SSD from Corsair that I've worked with. Despite sharing the increasingly popular 7mm, 2.5-inch design, the drive is quite different from most SSDs because it uses a new controller, the LAMD LM87800, together with high-performance toggle-mode NAND from Toshiba. The result is one of the best performances I've seen. The good news is that getting it won't break the bank: the new Corsair drive is priced at around $1 per gigabyte, and the price is expected to drop soon.

Plextor M5 Pro
The Plextor M5 Pro is one of the fastest SSDs on the market and the first SSD from Plextor to use the 7mm, 2.5-inch design. Similar to the Corsair drive above, it also comes with a new controller, the Marvell 88SS9187 Monet, which provides enterprise-grade double data protection. The drive also comes with friendly pricing, costing less than $1 per gigabyte.

WD VelociRaptor WD1000DHTZ
The WD VelociRaptor WD1000DHTZ is the only standard hard drive on this list and it makes it here because it's one of a kind.

Unlike the rest of the consumer-grade hard drives on the market, the VelociRaptor spins at 10,000rpm (as opposed to 7,200rpm in other high-speed hard drives) and offered very fast performance in my testing. While the drive can't compete with SSDs in boot and shutdown times, it offers comparable data copy rates, and even faster ones when you use two units in RAID 0. This means the drive makes an excellent secondary drive for a high-end system where the main drive, which hosts the operating system, is an SSD. That said, you can also use it as a main drive in a desktop and expect quite an improvement over other hard drives.

Those who do a lot of data manipulation work, such as movie editing, would benefit a lot from the VelociRaptor since, like all hard drives, it doesn't suffer from the limited program/erase cycles found in all SSDs.

Source: http://cnet.co/YUKWpC

Is Time Machine really backing up your drives?


Time Machine should by default back up all internal hard drives on a Mac; however, some people are experiencing a problem in which drives appear to be automatically excluded from being backed up.

A new report is out that suggests a fault may exist in Apple's Time Machine service, causing internal drives to be automatically and silently added to Time Machine's exclusion list, resulting in the service not backing up the data on these drives and not notifying the user of the change.

As outlined on Diglloyd's Mac Performance Guide blog, this problem appears to be specific to setups where many internal drives are being managed. To reproduce it, make sure one of your internal drives is mounted and available, add it to the Time Machine exclusion list, and restart your computer. After the system boots, reopen the Time Machine exclusion list and then unmount the drive you previously excluded.

When performing these steps, those experiencing this issue will see the drive disappear from the exclusion list as expected but then be replaced by another mounted drive in the system that is subsequently excluded from backups. This unintended addition to the exclusion list will result in this new drive's data not being backed up.
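
If you want to keep an eye on this yourself, here is a rough sketch (my own, not an Apple-provided tool) that dumps Time Machine's preferences file so you can watch the exclusion list over time. The path is the standard location for that file; the key names inside it vary between OS X releases, so the sketch simply prints everything rather than assuming any particular key:

# Rough sketch: dump Time Machine's preferences file so you can see
# which volumes and paths are currently excluded. The file is a
# binary plist, which plistlib reads directly. Key names differ
# between OS X versions, so print everything rather than guess.
import plistlib
import pprint

TM_PREFS = "/Library/Preferences/com.apple.TimeMachine.plist"

with open(TM_PREFS, "rb") as fp:
    prefs = plistlib.load(fp)

pprint.pprint(prefs)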

It is important not to confuse this behavior with Time Machine's default behavior for handling external USB and FireWire drives. Time Machine is built to exclude external drives by default, so only the Mac's internal drives are backed up and not every USB thumbdrive you use is copied to your backups.

If it is indeed a bug, this issue appears to be a problem only for those with rather elaborate drive setups involving multiple, heavily partitioned internal drives. Most Mac users have one or perhaps two internal drives, and Time Machine handles those setups quite well.

Additionally, this bug may be more specific to individual setups instead of being a problem experienced by all Mac users. While some readers have confirmed seeing the problem that Diglloyd outlined, others with similar setups have not seen this specific problem. Though Diglloyd claims it has been around since OS X Lion, this inconsistency makes it difficult to find a single cause of the problem. However, one possibility may lie in how Time Machine identifies drives to include or exclude.

When handling individual files and folders, Time Machine excludes by file path; when managing volumes, however, it does so by UUID, which may be the source of the problem for those experiencing this bug. The UUID for a volume ought to be a unique number generated when the volume is formatted, based on the drive's properties. But if for some reason the UUID is blank (all zeros) or matches that of another drive (for example, after cloning one volume to another), the system may still use it while services that rely on it to identify the drive run into problems. It is possible that Time Machine could confuse two similar UUIDs in this manner, especially if multiple utilities and operating systems have been used to manage partitions on a system's internal drives.

To check the UUIDs of the volumes on the system, open Disk Utility and get information on each mounted volume. The window that appears lists data and statistics for the drive, including an entry for the UUID. Compare these between your various volumes to make sure they are unique.
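
The same check can be scripted. The sketch below is only an illustration: it shells out to Apple's diskutil tool for each mounted volume and flags UUIDs that are blank or shared. It parses the human-readable output, so the "Volume UUID" field label is an assumption that may vary between OS X releases:

# Illustrative sketch: list the UUID of each mounted volume and warn
# about blanks or duplicates. It parses the human-readable output of
# `diskutil info`, so the "Volume UUID" label is an assumption that
# may vary between OS X releases.
import os
import subprocess
from collections import defaultdict

uuids = defaultdict(list)

for volume in os.listdir("/Volumes"):
    path = os.path.join("/Volumes", volume)
    try:
        info = subprocess.check_output(["diskutil", "info", path]).decode()
    except subprocess.CalledProcessError:
        continue  # not a real volume; skip it
    for line in info.splitlines():
        if "Volume UUID" in line:
            uuid = line.split(":", 1)[1].strip()
            uuids[uuid].append(volume)

for uuid, volumes in uuids.items():
    note = ""
    if len(volumes) > 1:
        note = "  <-- shared by more than one volume"
    elif uuid.strip("0-") == "":
        note = "  <-- blank (all zeros)"
    print(uuid, volumes, note)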

Additionally, this problem may be rooted in corruption of the Time Machine preferences file, which holds all of the volume configuration information for the service. A corrupt preferences file is a common reason why programs and services stop working properly, and removing it so it will be rebuilt from scratch is an easy and recommended remedy. To do this for Time Machine, first go to the Time Machine system preferences and make a note of the backup drives used and the list of excluded items; screenshots are an easy way to do this.

Then open the /Macintosh HD/Library/Preferences/ folder and remove the file "com.apple.TimeMachine.plist" and restart your computer. This will cause the Time Machine service to relaunch and recreate its default preferences file. After doing this you can refer to your notes on your previous Time Machine configuration and add the destination drives and exclusion list items again accordingly.
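
For those comfortable in the Terminal, here is a minimal sketch of that same reset, assuming the standard preferences path; the backup location is an arbitrary choice of mine, and you still need to restart and re-enter your Time Machine settings afterward:

# Minimal sketch of the reset described above: back up the Time
# Machine preferences file, then remove the original so the system
# rebuilds a default one. The backup location is an arbitrary choice.
# Run with administrator rights (sudo) and restart afterwards.
import os
import shutil

TM_PREFS = "/Library/Preferences/com.apple.TimeMachine.plist"
BACKUP = "/Users/Shared/com.apple.TimeMachine.plist.bak"

if os.path.exists(TM_PREFS):
    shutil.copy2(TM_PREFS, BACKUP)   # keep a copy of the old settings
    os.remove(TM_PREFS)              # Time Machine will recreate defaults
    print("Preferences backed up to", BACKUP, "- restart to finish.")
else:
    print("No Time Machine preferences file found at", TM_PREFS)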

Even if not everyone is affected by this bug, it does serve to remind us that there may be odd quirks with any backup system, so it is always best to regularly check your backup routines, and consider using multiple approaches to your backups (for example, drive cloning in addition to Time Machine). In addition to making sure your backup services are set up correctly, be sure to check the destination drive or drives themselves to make sure they are not experiencing any errors, by using Disk Utility to run a format and partition table verification.

These options are especially key as your storage setups expand and get more complex. Often people start with a single drive and then slowly add more storage and migrate their data to larger setups (both internal and external), and over time can build quite elaborate drive setups. As this happens, making sure the data gets properly managed with whatever backup approaches are being used becomes more important.

Tuesday, December 4, 2012

Flash memory made immortal by fiery heat


Macronix's 'Thermal annealing' process extends SSD life from 10k to 100m read/write cycles

Taiwanese non-volatile flash memory manufacturer Macronix is set to reveal technology it says will make it possible for flash memory to survive 100 million read/write cycles.

Today's flash memory wears out over time, because moving electrons in and out of transistors wears them down. That problem has led manufacturers to implement “wear levelling” algorithms that ensure data is not always written to the same regions of a disk. By sharing the work around, wear levelling helps flash disks to age gracefully, instead of frequently-used regions dying young.
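
Macronix has not published its firmware, but the basic idea of wear levelling is easy to sketch: always write to the block that has seen the fewest erases. The toy Python model below is purely illustrative and has nothing to do with any vendor's actual algorithm:

# Toy illustration of wear levelling (not any vendor's real firmware).
# A tiny "flash device" always writes to the least-worn block, so
# erase counts stay roughly even across the device instead of one hot
# block dying young.
NUM_BLOCKS = 8
erase_counts = [0] * NUM_BLOCKS

def write_block(data):
    """Pick the block with the fewest erases, 'erase' it, and write there."""
    target = min(range(NUM_BLOCKS), key=lambda b: erase_counts[b])
    erase_counts[target] += 1   # every write costs one erase cycle here
    return target               # where the data (ignored in this toy) landed

for _ in range(1000):
    write_block(b"some data")

print("Erase counts per block:", erase_counts)   # roughly 125 each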

The likes of Intel and Samsung have, of late, been keen to point out that their wear levelling techniques are now so advanced that the disks they make will likely survive all but the most extended working lives.

But Macronix's innovations, which it will reveal at the 2012 International Electron Devices Meeting in San Francisco on December 11th, look likely to make wear levelling irrelevant.

Macronix has signed up to deliver a talk titled “9.1 Radically Extending the Cycling Endurance of Flash Memory (to > 100M Cycles) by Using Built-in Thermal Annealing to Self-heal the Stress-Induced Damage” at the event. The summary for the talk explains that “Flash memory endurance is limited by the tunnel oxide degradation after repeated P/E stressing in strong electric field.”

“Thermal annealing should be able to repair the oxide damage,” the summary continues, “but such theory cannot be tested in real time since completed device cannot endure high temperature > 400°C and long baking time is impractical for real time operation.”

Macronix's breakthrough is described as “A novel self-healing Flash, where a locally high temperature (>800°C), short time (ms) annealing is generated by a built-in heater.” This apparatus performs so well that the company says its tests reached 100 million read/write cycles without any signs of trouble ... and that longer life looks possible but they ran out of time to run more tests.

A happy by-product of the process is faster erase times, giving flash yet another speed advantage over spinning rust.

One of the researchers on the project, Hang-Ting Lue, has told IEEE Spectrum that the amount of energy required to heat flash in this way is small enough for the technique to be suitable for use in mobile devices. He has also said Macronix intends to commercialise the technology, but of course it's too early to suggest a timeframe for its arrival in mass-market products.

Source: http://bit.ly/VeJAOw

MySQL gains new batch of vulns


Overruns, privileges, DoS and more

A series of posts on ExploitDB by an author signing as “King Cope” reveals a new set of MySQL vulnerabilities – along with one item that may simply be a configuration problem.

The vulnerabilities, which emerged on Saturday, include a denial-of-service demonstration, a Windows remote root attack, two overrun attacks that work on Linux, and one privilege escalation attack, also on Linux.

The overflow bugs crash the MySQL daemon, allowing the attacker to then execute commands with the same privileges as the user running MySQL. “King Cope” also demonstrated a user enumeration vulnerability.

The privilege escalation vulnerability, in which an attacker could escalate themselves to the same file permissions as the MySQL administrative user, has provoked some to-and-fro on the Full Disclosure mailing list, with one writer stating that “CVE-2012-5613 is not a bug, but a result of a misconfiguration, much like an anonymous ftp upload access to the $HOME of the ftp user.”

Red Hat has assigned CVEs to the vulnerabilities, but at the time of writing, Oracle has not commented on the issues.

Source: http://bit.ly/Vs7trM

AMD finishes its 'Piledriver' Opteron server chip rollout


Looking ahead to 'Steamroller' and 'Excavator', presumably

AMD has had a wrenching couple of years, and its executives are wrestling with so many transitions in the processor market and inside AMD that they are just punting out the new Opteron 4300 and 3300 CPUs for entry servers without making a fuss with the press or analyst communities.

No briefings, no fuss, no muss, here's the press release and the die shot, and we'll see ya next year.

It's an unconventional approach, but El Reg will get you the information about the new chips and do a little analysis, just the same.

The new Opteron 4300 and 3300 processors are based on the same "Piledriver" cores that are the guts of the "Trinity" Fusion A Series APUs announced in May, the FX Series of enthusiast CPUs that came out in October, and the "Abu Dhabi" Opteron 6300 high-end server chips that launched in November.

The original plan before Rory Read took over as CEO was to push the Opteron 4300 processor up to ten cores with a chip known as "Sepang" and fitting in a new processor socket that was a successor to the C32 socket used with the Opteron 4000s, 4100s, and 4200s. An Opteron 6300 was then going to be two of these Sepang chips jammed into a single ceramic package, side by side, for a total of 20 cores in a single socket.

The problem was that the Opteron 6300 socket was not going to be compatible with the G34 socket. With so few server makers standing behind the Opterons (relative to their peak enthusiasm five or six years ago), a socket change would have been bad for AMD – particularly coming at a time when GlobalFoundries was having trouble ramping its 32 nanometer processes.

And so in November 2011, AMD scrapped that plan and decided instead to just focus on making the Piledriver cores do more work and get modest clock speed increases. In February of this year, AMD publicly copped to this changed plan, giving us the eight-core "Seoul" Opteron 4300 and the "Delhi" Opteron 3300, as well as the aforementioned Opteron 6300 that is already out there.

The Piledriver cores have four new instructions and a bunch of tweaks to goose the performance of the dual-core module compared to the first-generation "Bulldozer" module. The new instructions include FMA3 (floating point fused multiply-add), BMI (bit manipulation instructions), TBM (trailing bit manipulation), and F16C (half-precision 16-bit floating point conversion).

As we discussed at length with the Opteron 6300 launch, the branch predictors, schedulers, load/store units, and data prefetchers have all been tweaked to run better, and the memory controller had its top memory speed goosed from 1.6GHz to 1.87GHz.

Add up all of the changes in the Piledriver cores and you get a 7 to 8 per cent improvement in instructions per cycle, plus slightly faster memory and slightly higher clock speeds.

The Opteron 4300 has eight cores on a die, and depending on the model, it has either six or eight of those cores activated. It is aimed at both unisocket and dual-socket machines, somewhere between a high-end Xeon E3 and a low-end Xeon E5 in the Intel x86 server chip lineup.

The memory controller on the Opteron 4300 supports ultra-low-voltage DDR3 main memory that runs at 1.25 volts, as well as regular 1.5-volt and low-voltage 1.35-volt memory sticks. The processor supports up to six memory slots and two memory channels per C32 socket, for a maximum capacity of 192GB of memory per socket.

The Opteron 4300 has two x16 HyperTransport 3 (HT3) point-to-point links running at 6.4GT/sec linking the two processors in a dual-socket machine together as well as to the chipset and peripherals in the server.

AMD didn't just do a global replace of the Bulldozer cores with the Piledriver cores to make the Opteron 4300s. It made a few changes to the lineup.

For many years, AMD has been shipping four different styles of Opterons. The plain vanilla ones run at the standard voltage and have the standard thermal profiles. The Special Editions, or SEs, run hotter and clock higher and deliver the highest performance, but they are also wickedly expensive and impossible to put into dense servers. The Highly Efficient, or HE, parts are a bin sort to find chips that run at significantly lower voltages with slightly lower clock speeds than the standard parts, and the Extremely Efficient, or EE, parts are a deeper bin sort to find chips that run at even lower voltages and clock speeds but which have very low thermals.

The Opteron 6000 series comes in SE, regular, and HE variants, while the Opteron 4000s and now the Opteron 3000s come in standard, HE, and EE variants.

The interesting thing about the Opteron 4300s is the EE part. With the Opteron 4200, AMD offered an eight-core processor that ran at 1.6GHz with a Turbo Core boost speed of 2.8GHz in a 35-watt thermal envelope; it cost $377. With the Opteron 4300, AMD has downshifted the EE part to only four active cores, but goosed the base clock speed to 2.2GHz and the Turbo Core speed to 3.0GHz while still staying in that 35-watt thermal envelope.

The other thing to notice is that there are only six Opteron 4300s compared to ten Opteron 4200s, but that doesn't mean much. There are two eight-core parts and three six-core variants, and there are fewer standard and HE SKUs. AMD could add more SKUs in spring 2013, and probably will in February or March when Intel is readying its "Haswell" Xeon E3 chips.

SKU for SKU, the new Opteron 4300s offer the same or around 3 per cent more clock speed than the Opteron 4200s they are most like in the product line, except for that radically different EE part; those chips cost around 10 per cent more. When you add the instruction-level performance to the clock speed gains, you get a chip that has about 10 per cent more oomph for the same increase in cost. Add in compiler tweaks and you can push performance gains up by as much as 15 per cent, says AMD.
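
As a back-of-the-envelope check (our arithmetic, not AMD's), compounding the quoted IPC gain with the roughly 3 per cent clock bump does land right around that 10 per cent figure:

# Back-of-the-envelope check of the ~10 per cent figure (our
# arithmetic, not AMD's): compound the quoted IPC gain with a
# roughly 3 per cent clock speed bump.
ipc_gain = 0.075     # midpoint of the quoted 7 to 8 per cent
clock_gain = 0.03    # about 3 per cent more clock, SKU for SKU

combined = (1 + ipc_gain) * (1 + clock_gain) - 1
print("Combined uplift: {:.1%}".format(combined))   # roughly 10.7%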

This is not the kind of thing that will cause companies to ditch Xeons for Opterons, but by the same token this is probably sufficient to keep Opteron customers who have invested in particular servers adding the new chips to their machines. AMD needs something more dramatic than this to shake up the x86 server biz, and for now it looks like the company is content to have us all wondering what its Opteron ARM plans are.

Made with microservers in mind
The Opteron 3300s, like their predecessors the Opteron 3200s, fit in the AM3+ socket, which is a 942-pin socket that is not precisely compatible with the prior 941-pin AM3 socket and even less so with 940-pin AM2 and AM2+ sockets. (This is not to be confused with the original Socket 940 socket for the Opteron chips from way back when.) What matters to microserver customers is that any machine they have that used an Opteron 3200 can use an Opteron 3300.

There are three models of the Opteron 3300s, one with eight cores and two with four cores, just like with the Opteron 3200s. In general, the base and turbo clock speeds are up by 100MHz to 200MHz and the prices are the same for the top two parts. The Opteron 3300s are aimed at single-socket servers only, and support up to four memory sticks for that socket, with two memory channels. The Opteron 3300 has one x16 HT3 link running at 5.2GT/sec.

The big difference this time around is that there is an HE and an EE part instead of two HE parts. The low-end Opteron 3320 EE is quite a bit different from its predecessor. For one thing, its clock speed is a lot lower, down to 1.9GHz from 2.5GHz, and its thermals have similarly taken a big dive, down to 25 watts peak from 45 watts with the Opteron 3250 HE part. That is what happens when you drop the voltage and the clocks at the same time.

This Opteron 3320 EE chip is clearly aimed at Intel's two-core, four-thread Xeon E3-1220L v2 processor, which fits in a 17-watt thermal envelope and which costs $189.

The low-end Opteron 3300 is perhaps being positioned to compete with the forthcoming "Centerton" Atom processor, as well, which is expected before year's end. Intel is shooting for that to be a 6-watt part, and that means a four-core Opteron 3320 EE has to do four times the work of a Centerton Atom at the same price to compete. We'll see in a few weeks.

What is also interesting is that AMD was pitching the $99 Opteron 3250 HE chip at low-end microservers with the goal of helping service providers put together cheap minimalist boxes for hosting (not virtualization, but bare-metal hosting). The Opteron 3320 EE is going to have less performance than its predecessor – probably somewhere around 20 per cent less is our guess – and yet it costs nearly twice as much.

The thing is, you can get down into a 25-watt power envelope, and AMD clearly thinks it can charge a premium for a "real" x86 processor down in that range. The hosters will still be able to get the older Opteron 3250 HE if they want it, of course.

Now would be a good time for AMD to start telling people about its real plans for future "Steamroller" and "Excavator" Opterons. The engineers had better come up with something good that GlobalFoundries can actually make on time.

SeaMicro servers and Opteron ARM chips cannot save AMD's Opteron business. It has to fight Intel – and win.

Source: http://bit.ly/UcvbBE

New Mac malware uses OS X launch services


A new malware for OS X also acts as a reminder to monitor the launch services in OS X as a security precaution.

Security company Intego is reporting the discovery of a new malware package for OS X. The package is a Trojan horse called OSX/Dockster.A that appears to have keylogging features to record what is being typed on an infected system, in addition to remote-access features that give backdoor access to the system. When installed, the Trojan attempts to contact the server "itsec.eicp.net," likely to receive instructions for allowing remote access to the system.

As with other recent malware for OS X, Dockster is a Java-based threat that will not run unless you have Java installed on your system. It also currently uses the patched CVE-2012-0507 vulnerability in Java that was used by the Flashback malware in OS X, and appears to be in a testing phase. As a result, this Trojan is a minimal threat; however, its mode of infection offers a reminder to OS X users that simply monitoring the system launcher configurations on your Mac can be an easy way to determine if malware or other unwanted software is being installed on your computer.

As with other OS X malware, this new Trojan utilizes launch agents, which are small configuration files that tell the launcher processes in the system (one that runs globally and another that runs for each log-in session) to automatically and conditionally start or stop various background routines. To do so, a developer simply has to create a properly formatted configuration file and drop it into one of the folders monitored by the launcher process. After doing this, the next time the system is restarted or if you log out and log back in, the task will load and run.

The default folders the launcher uses are called "LaunchDaemons" and "LaunchAgents," and are located directly inside the Library folders either for the system, for all users, or for individual users. While a Trojan or other malware will need administrative access (e.g., prompt for a password) to install a configuration file in global resources such as the system folder or in the library folder for all users, it can write to the LaunchAgents folder for your personal account without prompting you, resulting in a targeted process or routine running when you log into your account.
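
For reference, a launch agent is nothing more than a small property list. The sketch below uses Python's plistlib to print a harmless example; the label and the command it points at are made up for illustration, while Label, ProgramArguments, and RunAtLoad are standard launchd keys:

# Sketch of what a (benign) launch agent looks like. The label and the
# command are made-up examples; Label, ProgramArguments and RunAtLoad
# are standard launchd keys. A file like this dropped into
# ~/Library/LaunchAgents is run by launchd at every log-in.
import plistlib
import sys

agent = {
    "Label": "com.example.hello",                     # hypothetical name
    "ProgramArguments": ["/bin/echo", "hello from launchd"],
    "RunAtLoad": True,                                # start at log-in
}

plistlib.dump(agent, sys.stdout.buffer)               # print the plist XML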

In this case, a launch agent file called "mac.Dockset.deman" is created in the user's account; it has the system launch the Trojan at log-in, and when run will load an executable that appears as ".Dockset" in Activity Monitor.

While the use of these launcher configuration files makes it easy for malware developers to have programs launch automatically, it also makes it easy to detect this malicious behavior. By setting up a folder monitoring service, you can have the system notify you if a file has been added to these folders, so you can check it out and further investigate its origin and function.
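
As a rough illustration of such a service (and only an illustration; Apple's built-in Folder Actions approach is covered next), a bare-bones polling script along these lines will report anything that appears in or disappears from the standard launchd folders:

# Bare-bones sketch of a launch-folder monitor: poll the standard
# launchd folders every few seconds and report any file that appears
# or disappears. Press Ctrl-C to stop. Apple's Folder Actions service,
# described below, is the built-in way to do the same job.
import os
import time

WATCHED = [
    "/Library/LaunchAgents",
    "/Library/LaunchDaemons",
    os.path.expanduser("~/Library/LaunchAgents"),
]

def snapshot():
    state = {}
    for folder in WATCHED:
        try:
            state[folder] = set(os.listdir(folder))
        except OSError:
            state[folder] = set()   # folder may not exist on this system
    return state

previous = snapshot()
while True:
    time.sleep(10)                  # poll interval, arbitrary
    current = snapshot()
    for folder in WATCHED:
        for added in sorted(current[folder] - previous[folder]):
            print("ADDED:  ", os.path.join(folder, added))
        for removed in sorted(previous[folder] - current[folder]):
            print("REMOVED:", os.path.join(folder, removed))
    previous = current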

While you can write custom scripts and programs to do this, Apple includes all the components necessary for monitoring a folder in OS X. To run AppleScript routines when a folder's contents change, Apple provides a service called Folder Actions that can be set up to monitor folders on the file system. This service can bind some of the built-in example AppleScript scripts that Apple provides to a folder, in order to monitor the contents of the various launch agent and launch daemon folders in the system and prompt you with a warning whenever a file is added or removed.

AppleScript is relatively behind the scenes in OS X, so setting up this service takes a couple of steps to complete, but everything you need is available on the system, making it a relatively painless process. I recently outlined how to use this service to monitor launch agent folders, which I recommend all Mac users do to ensure they are aware of what is configured to run automatically on their systems.

Source: http://cnet.co/TyBTDd

New Mac malware spreading from Dalai Lama tribute site


"Dockster" takes advantage of the same vulnerability exploited by the "Flashback" malware, which infected more than 600,000 computers.

A new piece of Mac malware has been discovered on a Web site linked to the Dalai Lama, using a well-documented Java exploit to install a Trojan on visitors' computers and steal personal information.

Dubbed "Dockster," the malware was found lurking on Gyalwarinpoche.com, according to security research firm F-Secure. The malware takes advantage of the same vulnerability exploited by the "Flashback" malware to install a basic backdoor that allows the attacker to download files and log keystrokes.

Although "Dockster" leverages an exploit that has already been patched, computers not updated or running older software may still be at risk. F-Secure notes that this is not the first time Gyalwarinpoche.com has been compromised and warns that Mac users aren't the only ones who should avoid visiting the site; Windows malware has also been detected on it.

At its height, the original Flashback, which was designed to grab passwords and other information from users through their Web browser and other applications, was estimated to be infecting more than 600,000 Macs. The original malware, first detected in fall 2011, typically installed itself after a user mistook it for a legitimate browser plug-in while visiting a malicious Web site. The malware would then collect personal information and send it back to remote servers.

Source: http://cnet.co/QDliRE