It's very rare to find a NAS HA setup that isn't risky to some degree; the protocols involved just don't support it well. So a NAS HA setup typically boils down to running heartbeats between a master and a backup head, plus either a shared storage fabric, or separate storage with HAST (FreeBSD) or DRBD (Linux) handling near-realtime block replication to the backup unit. CARP then provides the service IP address. If a switchover from master to backup is necessary, some sort of logic has to force the master to die and the pool to be imported on the backup head, at which point all connected clients suddenly find themselves disconnected from "the" server and are expected to magically reconnect. This is the primary disaster window: will a client reconnect properly? Will it retry a transaction that was in progress to the master, or one that the master accepted but had not yet committed, and which therefore never made it to the storage pool? There are layers and layers of stuff going on.
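The takeover logic described above can be sketched roughly as follows. This is a toy model, not any real heartbeat daemon: the callables (fence_master, import_pool, take_service_ip) are hypothetical stand-ins for whatever your setup actually does, e.g. a STONITH power-off, "zpool import -f", and a CARP promotion. The one non-negotiable detail it illustrates is ordering: fence first, because importing the pool while a merely-wedged master still holds the disks is how you destroy it.

```python
import time

def failover_monitor(heartbeat_ok, fence_master, import_pool,
                     take_service_ip, miss_limit=3, interval=1.0):
    """Run on the backup head: watch the master, take over when it dies.

    heartbeat_ok    -- callable returning True while the master answers
    fence_master    -- power off / isolate the old master (STONITH)
    import_pool     -- force-import the storage pool on this head
    take_service_ip -- promote CARP / bring up the shared service address
    """
    missed = 0
    while True:
        if heartbeat_ok():
            missed = 0
        else:
            missed += 1
            if missed >= miss_limit:
                # Fence FIRST: if the master is merely hung, importing the
                # pool while it can still write to the disks corrupts it.
                fence_master()
                import_pool()
                take_service_ip()
                return "failover-complete"
        time.sleep(interval)
```

Note that nothing in this loop helps the clients; they still experience the disconnect and must handle reconnection themselves.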
Now in many cases, this might not matter too much. If your departmental fileserver causes everybody to reconnect and someone gets a strange error on their screen because they were saving a file, they'll probably "figure it out." If a change was lost, they'll probably just curse Microsoft. But for those of us in the world of virtual machines, lost transactions can mean serious trouble later on, as VM images become corrupted without any detection.
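To see why VM images are the worst case, consider a toy model (hypothetical classes, not any real NAS code) of a head that acknowledges a write before it reaches the replicated pool. The client has an ACK in hand, so it will never retry; after failover, the block silently holds stale data:

```python
class ToyHead:
    """Toy model of a NAS head that ACKs async writes before committing."""
    def __init__(self):
        self.committed = {}   # data that actually reached the replicated pool
        self.in_flight = {}   # acknowledged to the client, not yet on stable storage

    def write(self, block, data):
        self.in_flight[block] = data   # ACK immediately (asynchronous write)
        return "ACK"

    def flush(self):
        self.committed.update(self.in_flight)
        self.in_flight.clear()

    def failover(self):
        # The backup head imports the pool: only committed data survives.
        survivor = ToyHead()
        survivor.committed = dict(self.committed)
        return survivor

master = ToyHead()
master.write(0, "filesystem metadata v2")   # client receives an ACK...
backup = master.failover()                  # ...then the master dies un-flushed
# The guest OS inside the VM believes block 0 was updated; the pool disagrees,
# and nothing anywhere flags the mismatch.
assert backup.committed.get(0) != "filesystem metadata v2"
```

A human saving a document sees an error dialog and saves again; a guest filesystem that trusted that ACK just keeps running on top of an image that no longer matches its own metadata.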
People have figured out how to do this on FreeBSD:
http://forums.freebsd.org/showthread.php?t=29639
and I'm sure Synology has a nice implementation as well. However, I am skeptical that any of these are sufficiently resilient to handle all the edge cases.