So far so good.
I knew the configuration of all of the disks from the RAID controller setup screen (ctrl-a when booting). I had already removed the failed disk to make room for a fresh one to hold the new system (there were no free slots).
The RAID controller turned out to be smart enough: everything worked smoothly. I shut down the computer, pulled a known single-volume disk that didn't hold much data (and for which I had a complete backup), and inserted the failed disk into that slot. On boot, the controller detected the change, built a new configuration, and came up just fine. I was able to mount the bad disk and copy everything off of it except the contents of /usr/lib. That should give me everything (config files, etc.) I need to rebuild the system the way it was before. I copied the files off the bad disk by rsyncing what I thought were the most important directories first, in case something bad happened mid-copy.

After turning the computer off, re-inserting the disk I had swapped out, and turning it back on, the RAID controller once again detected the change and reconfigured itself, and now everything looks as it did earlier today. I can see and mount all of the disks and access their data, and I also have a copy of everything that was on the old system disk (except /usr/lib) from which I can (hopefully) get the system back into its pre-crash state.
Thanks.
-J
---------- Post updated at 01:57 PM ---------- Previous update was at 12:57 PM ----------
Apparently, I foolishly chose the default disk partitioning when installing the new system. Now / (slice 0) has very little space on it (6.4 GB), while the rest of the space on the disk (124 GB) is mounted as /export/home (slice 7).
So far, I've only made a few minor changes to /, and none to /export/home.
Is there any way to repartition the disk so that the whole thing is allocated to / in slice 0, or will I have to reinstall the system (again)?
User home directories are all on a separate disk anyway.
Thanks.
-J
---------- Post updated at 03:29 PM ---------- Previous update was at 01:57 PM ----------
FYI to anyone who might read this thread in the future:
I was actually able to increase the size of partition 0 to the full disk (except for the swap and boot slices) by following, with modifications for my setup, the instructions at
https://blogs.oracle.com/michel/entr...aris_partition and then running growfs. It didn't even bork my system! I had been prepared to have to reinstall Solaris.
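In outline, the procedure I followed looked roughly like the commands below. The device name c0t0d0 is an assumption for illustration (check yours with format or prtvtoc), and the format step is interactive; do not run any of this without a backup:

```shell
# Stop using the slice that will be given up (here slice 7).
umount /export/home

# In format(1M) -> partition: shrink slice 7 to 0 cylinders, extend
# slice 0 over the freed space, then write the new label to the disk.
format

# Grow the mounted root filesystem into the enlarged slice 0.
# -M / lets growfs operate on a mounted UFS filesystem.
growfs -M / /dev/rdsk/c0t0d0s0
```

growfs can only extend a UFS filesystem, never shrink one, which is why this works for giving / the whole disk but not for the reverse.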
However, I didn't think to first remove the line from my vfstab that tried to mount the partition I had removed. That caused problems on boot and ended up requiring another reboot.
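To save someone else that extra reboot: before restarting, comment out the stale entry in /etc/vfstab. One way (the sed pattern and device names are just an example of how the entry might look):

```shell
# Keep a backup, then comment out the /export/home line so boot
# doesn't try to mount the slice that no longer exists.
cp /etc/vfstab /etc/vfstab.bak
sed '/\/export\/home/s/^/#/' /etc/vfstab.bak > /etc/vfstab
```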
The system is probably still not properly tuned. Of course, only now that it's done does it occur to me that swap should probably have been made much larger. This is what happens when you only do system management occasionally.
-J