
2009-08-29

Robin Harris surprised by Apple dropping ZFS

Robin Harris regretfully reports, "Apple kicks ZFS in the butt", and speculates about why Apple did not ship ZFS as part of Snow Leopard, its latest incarnation of Mac OS X:

What did it in? Maybe it was a schedule problem - file systems require a lot of testing - and rewriting all the other bits took precedence. NIH - Not Invented Here - syndrome is another possibility. Or perhaps the uncertainty of Sun’s future led Apple to pull back.

Or maybe they just decided customers wouldn’t know enough to care, so why bother? Whatever the reason it is a major step backwards for the PC industry.


I can think of a few practical reasons myself.


For now, I'll focus on one: Apple's not ready for it. And perhaps neither are the users.


Rule #1: Apple designs and sells systems that are supposed to just work. No hassle, no jumping through hoops, no bells, no whistles. Pure form, pure function.


ZFS was designed to do one thing really well. You give it your storage and your data, and it will go to extreme lengths to protect your data. It needs at least two disks to do that.


Apple's desktop systems and notebook computers still come with only one disk inside.


Use an external disk for ZFS redundancy? The ultimate Rule #1 violation. The whole point of an external disk is that it can be disconnected. The whole point of ZFS redundancy is that you don't want to create even a hint that one of its disks could be disconnected.


After all, there is only one storage pool, and ZFS will take care of it, thank you kindly, sir. The FireWire/USB/eSATA cable is just the rope the user needs to hang themselves. Allow them to disconnect the drive, and, friendly as Mac OS is, you can provide enough automation to recognise that the cable was disconnected and show a kindly, Apple-style warning: "Mac OS cannot protect your data if you do not reconnect the external volume."


People are just not ready for this yet. You don't want to run ZFS on hardware that can be disconnected on a whim, or purely by accident; that is just asking for trouble. After three or four friendly warnings, people will ignore them. Yes, I know! Stop nagging me! The ease with which ZFS can recover from this will only encourage people to become careless, annoyed, or both.


ZFS will be ready for consumer use when all the volumes in a storage pool reside together in the same device. Detachable storage is great for backups, especially with a notebook, but it would have to be redundant itself. So now we're talking about at least four disks: two inside the computer, and two outside to protect against physical loss. Let's just stop there.


My conclusion is that Apple has probably made the right decision business-wise, but I hate them for not having the hardware to support ZFS. Maybe they will get back to it; I look forward to the day when they ship notebooks and iMacs with an even number of disk slots.



2009-03-06

A candidate for ZFS at home?

Willem de Moor of Tweakers.net News covers the presentation of the Asus Eee Station PC NAS at CeBIT this month.

The box is somewhat sturdier in design than the usual home NAS appliance: it has four Gigabit Ethernet ports, 2GB of DDR2 RAM, and an Intel Atom N270 processor clocked at 1.6GHz. There is room for two hard disks in a RAID 0, RAID 1, or JBOD configuration. The price tag comes to about 700 dollars.

The system runs Linux from 512MB of flash memory. How much effort would it take to replace that with OpenSolaris and ZFS?

2007-12-27

Another case for CIFS and ZFS?

Microsoft has released kb/946676, detailing a problem with Windows Home Server shared folders.

When you use certain programs to edit files on a home computer that uses Windows Home Server, the files may become corrupted when you save them to the home server.

The article warns about certain applications that are not supported with shared folders. Users should copy their files to local storage before opening them with any of the suspect applications.

That basically halves the functionality of WHS, which is being touted as a NAS/backup appliance.


I wonder, though, whether the problem described here is inherent in the way Windows applications use their data files. The typical approach I remember is that applications do live updates to the original file, after making a temporary backup copy. This is in contrast to the traditional Unix way of life, where you first write a working copy and, when done, use it to replace the original; see the sketch below.
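As a minimal sketch of that Unix-style pattern (the save_atomically helper is hypothetical, my own naming, not anything from WHS or the KB article), here is how replace-on-save avoids half-written files:

    import os
    import tempfile

    def save_atomically(path, data):
        """Write data to a temporary file, then rename it over the original.

        os.replace() is atomic on POSIX filesystems, so a reader sees either
        the complete old file or the complete new one, never a mixture; a
        crash mid-save leaves the original untouched.
        """
        directory = os.path.dirname(os.path.abspath(path))
        fd, tmp_path = tempfile.mkstemp(dir=directory)
        try:
            with os.fdopen(fd, "wb") as tmp:
                tmp.write(data)
                tmp.flush()
                os.fsync(tmp.fileno())   # push the bytes to stable storage
            os.replace(tmp_path, path)   # atomic rename over the original
        except BaseException:
            os.unlink(tmp_path)          # discard the temporary on failure
            raise

    # Usage: the original is only replaced once the new copy is complete.
    save_atomically("report.doc", b"new contents")

A live in-place update, by contrast, modifies the one and only copy, so a lost or reordered write over the network corrupts it directly.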

I really should build my own home NAS based on Solaris/CIFS server and ZFS next year. Let's see what kind of budget I have...



Update: It appears to be a reliability issue under heavy load, and it was hard to reproduce. I claim it would not happen with NFS, where a write is not acknowledged as committed until it has reached stable storage. That is also why NFS (and NAS in general) write performance can suffer badly if you don't have the hardware, such as NVRAM, to help it along.

Links:
mswhs.com UK fansite
Computerworld article

2007-02-17

Opensolaris zfs + dtrace guides available in pt_BR translation

A few OpenSolaris enthusiasts have translated the DTrace and ZFS guides into Brazilian Portuguese.

OpenSolaris i18n Forums: pt_BR translation zfs + dtrace guides available for review

I'm pleased to announce that the Brazilian Portuguese SGML & PDF versions of the following books are now available in the Download Center:

Solaris Dynamic Tracing Guide ( Solaris 10 3/05 : SGML & PDF)
Solaris ZFS Administration Guide ( Solaris 10 11/06 : SGML & PDF)

Note to self: I need to share this with our DBAs...

2007-01-28

More Adaptive Replacement Cache Algorithm

Based on a conversation during the recent nlosug meeting, I've updated the Wikipedia article on the ARC with a better explanation of the algorithm. The language is now more concrete, and the terms used are closer to the original literature.





2007-01-20

Adaptive Replacement Cache in ZFS

Last week, I could not reach the OpenSolaris source browser. I was looking for an explanation of what is called the 'ARC', or Adaptive Replacement Cache, in ZFS.

In contrast to the venerable UFS and NFS, ZFS does not use the normal Solaris VM subsystem for its page cache. Instead, pages are mapped into the kernel address space and managed by the ARC.

Looking through the zfs-discuss archives, I did not find any explanation of the ARC, except for references to the Solaris Architecture Council, which is useful enough in itself, but does not deal specifically with paging algorithms...

Googling around, I finally found some useful references: Roch Bourbonnais explains the acronym, and refers to the IBM Almaden research lab, where the Adaptive Replacement Cache algorithm was developed.

The original IBM version uses a cache directory twice as large as needed for the cache size. The extra space is used to keep track of recently evicted entries, so we know whether a cache miss actually refers to a recently used page or not.

After I created the wiki entry, I came up with this visualisation of the cache directory:

. . . [1 hit, evicted <- [1 hit, in cache <-|-> 2 hits, in cache] -> 2 hits, evicted] . . .

and the following for a modification in Solaris ZFS, which knows in advance that it should not throw out certain pages:

. . . [1 hit, evicted <- [1 hit, in cache <-|non-evictable|-> 2 hits, in cache] -> 2 hits, evicted] . . .

The inner brackets represent the actual cache, while the outer brackets show the virtual directory, which also refers to evicted entries. The total size of the cache is of course fixed, but the inner bracket pair moves freely between the outer brackets. In addition, the divider in the middle can also move around, favouring recent or frequent hits.
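To make that concrete, here is a minimal, illustrative Python sketch of the original IBM algorithm as published by Megiddo and Modha (not the ZFS variant with its non-evictable pages); the list names t1/t2/b1/b2 and the divider p follow their paper, while everything else is my own naming:

    from collections import OrderedDict

    class ARC:
        """Sketch of IBM's Adaptive Replacement Cache (Megiddo & Modha).

        t1: pages seen once, in cache     b1: ghosts of pages evicted from t1
        t2: pages seen twice+, in cache   b2: ghosts of pages evicted from t2
        Together the four lists form the double-sized directory; p is the
        moving divider, i.e. the target size of t1.
        """

        def __init__(self, size):
            self.c = size                 # real cache capacity
            self.p = 0                    # target size of t1
            self.t1, self.t2 = OrderedDict(), OrderedDict()
            self.b1, self.b2 = OrderedDict(), OrderedDict()

        def _replace(self, hit_in_b2):
            # Evict from t1 or t2, depending on which side of the divider
            # overflows; the evicted key is remembered as a ghost entry.
            # ("or not self.t2" is a guard for the degenerate all-t1 case.)
            evict_t1 = self.t1 and (len(self.t1) > self.p
                                    or (hit_in_b2 and len(self.t1) == self.p)
                                    or not self.t2)
            if evict_t1:
                key, _ = self.t1.popitem(last=False)   # LRU of t1
                self.b1[key] = None
            else:
                key, _ = self.t2.popitem(last=False)   # LRU of t2
                self.b2[key] = None

        def access(self, key):
            if key in self.t1 or key in self.t2:       # real hit: promote
                self.t1.pop(key, None)
                self.t2.pop(key, None)
            elif key in self.b1:                       # ghost hit: t1 too small
                # Integer approximation of the paper's |b2|/|b1| ratio.
                self.p = min(self.c, self.p + max(len(self.b2) // len(self.b1), 1))
                self._replace(False)
                del self.b1[key]
            elif key in self.b2:                       # ghost hit: t2 too small
                self.p = max(0, self.p - max(len(self.b1) // len(self.b2), 1))
                self._replace(True)
                del self.b2[key]
            else:                                      # complete miss
                total = len(self.t1) + len(self.t2) + len(self.b1) + len(self.b2)
                if len(self.t1) + len(self.b1) == self.c:
                    if len(self.t1) < self.c:
                        self.b1.popitem(last=False)    # trim the ghost list
                        self._replace(False)
                    else:
                        self.t1.popitem(last=False)    # b1 empty: drop LRU of t1
                elif total >= self.c:
                    if total == 2 * self.c:
                        self.b2.popitem(last=False)    # directory is full
                    self._replace(False)
                self.t1[key] = None                    # new page at MRU of t1
                return
            self.t2[key] = None                        # any kind of hit: MRU of t2

The ghost hits are what move the divider: a hit in b1 grows p, favouring recently seen pages, while a hit in b2 shrinks p, favouring frequently seen ones. The ZFS variant would additionally have to skip its non-evictable pages during eviction.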

Because the cache is mapped into kernel memory, this puts considerable stress on 32-bit (x86) systems, as the 4GB address space on that architecture is shared between kernel and user space. Space used by the cache limits the size of user processes. Don't run your DBMS on one of these.

Links:
Wikipedia: Adaptive_Replacement_Cache

2006-09-01

server3.fastmail.fm has been down all day

I can't get to my private mail.

Otherwise, the fastmail.fm service is pretty nice and reliable. Not today.

A typical case of Fsck You
Time to switch to Solaris/ZFS