Four days in Irkutsk
I will be away for the next four days on a trip to Irkutsk, where I’m going to work with a Sun 9990 and an SF6900.
A bigger four and a smaller one
Yeah, it sounds awkward and at the very least unusual, but I don’t know how else to express that today is my son’s birthday – he has just turned four years young. Not old but young – it’s we who are getting old. What a day! And at the same time, just two days ago his little sister stepped over the fourth month of her life. Congratulations to myself! ;-)
Solaris Live Upgrade
In the following post I’d like to walk, in a step-by-step manner, through an upgrade of the Solaris OS. My goal is to jump from Solaris 9 right onto the Solaris 10 wagon while causing as little downtime as possible. To achieve that goal I will be using the Solaris Live Upgrade facility together with a Solaris Flash archive to update a freshly created boot environment, or BE for short.
Let’s start our engines, gents. First of all, double check that patch 137477-01 or later (for SPARC) is installed, which adds p7zip support to your system. Since Live Upgrade depends on p7zip, the upgrade process would otherwise fail.
# showrev -p | grep 137477
Patch: 137477-01 Obsoletes: Requires: Incompatibles: Packages: SUNWbzip, SUNWsfman
Looks like we are good to go and can proceed with installing the required packages. Here we have several options: either use the pkgadd command or have everything done by the liveupgrade20 utility, but whichever route you choose, it’s always good practice to remove the previous versions if there are any:
# pkgrm SUNWlucfg SUNWluu SUNWlur
And only after that is it safe to install the new packages. Btw, keep in mind that you have to use the packages from the release you’re upgrading to.
# pkgadd -d path_to_packages SUNWlucfg SUNWlur SUNWluu
As I mentioned before, another option is to install the Live Upgrade packages using the liveupgrade20 script:
# cd /path_to/Solaris_10/Tools/Installers
# ./liveupgrade20 -nodisplay
Solaris Web Start will assist you in installing software for Live Upgrade.
Take heed of the following warning:
Before installing or running Live Upgrade, you are required to install a limited set of patch revisions. Make sure you have the most recently updated patch list by consulting sunsolve.sun.com. Search for the info doc 72099 on the SunSolve(tm) web site.
I prefer to just save the list of all required patches into a file and use a one-liner:
# for p in `cat patch.list | awk '{print $1}' | grep -v "^$" | cut -f1 -d\-`; do showrev -p | /usr/xpg4/bin/grep -q $p; echo "$p - $?"; done
115689 - 0
112951 - 0
113713 - 0
113280 - 0
114482 - 0
114329 - 0
114636 - 0
114006 - 0
113023 - 1
113859 - 0
137477 - 0
112966 - 0
112233 - 0
117426 - 0
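The same check can be wrapped in a small function for reuse. This is a sketch only: `check_patches`, `patch.list` and `showrev.out` are names of my own choosing, and it assumes the patch list holds one patch id per line while `showrev.out` is a saved copy of `showrev -p` output.

```shell
# Check each base patch id from a patch list against a saved copy of
# `showrev -p`; prints "<id> - 0" when the patch is present, "<id> - 1" when not.
check_patches() {
    patch_list="$1"     # file with one patch id (e.g. 115689-04) per line
    showrev_out="$2"    # saved output of `showrev -p`
    while read line; do
        [ -z "$line" ] && continue
        base=`printf '%s\n' "$line" | cut -f1 -d-`   # strip the revision suffix
        if grep "Patch: $base" "$showrev_out" >/dev/null 2>&1; then
            echo "$base - 0"
        else
            echo "$base - 1"
        fi
    done < "$patch_list"
}
```

Invoked as `check_patches patch.list showrev.out`, it produces the same "id - status" listing as the one-liner above.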
Since none of the SUNWcbcp, SUNWcwbcp, SUNWhbcp, SUNWhwbcp, SUNWkbcp, or SUNWkwbcp packages is installed, I can safely skip patch 113023.
And now comes the most magical part, which involves the lucreate command. But before creating a new BE, make sure that its new home has been prepared and partitioned properly in accordance with your needs.
I used the following command to create a new BE:
# lucreate -c Solaris9 -n Solaris10 -C /dev/dsk/c2t0d0s0 -m /:/dev/dsk/c3t0d0s0:ufs -m /var:/dev/dsk/c3t0d0s3:ufs -m -:/dev/dsk/c3t0d0s1:swap
Let me elaborate a bit and explain what the above command actually does. Since my goal is to create a new BE, I also want some means to distinguish the current and the new BEs from each other. To do that we can assign each BE its own name, and that’s why there are the -c and -n options: they give the names to be applied to the current (-c) and the new (-n) boot environments. The -C option deliberately points to the boot device of the source BE. And finally, the -m options explicitly tell lucreate which partitions, in our case / and /var, we’re about to copy, which target devices to copy them to, /dev/dsk/c3t0d0s0 and /dev/dsk/c3t0d0s3 respectively, and that the file systems should be created as UFS volumes. With swap it’s slightly different, because by default all swap partitions on a UFS-based source BE are shared with a UFS-based target BE. That means it’s up to you to decide whether or not you need a dedicated swap partition for your new BE. If the answer is affirmative, you have to specify it explicitly, e.g. -m -:/dev/dsk/c3t0d0s1:swap. All that and more, e.g. how to merge and split partitions, create SVM mirrors, etc., is explained in the lucreate(1M) man page.
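The mountpoint:device:fstype structure of each -m argument can be made explicit with a tiny parser. This is purely an illustration of the field layout; `explain_m_spec` is a made-up helper, not part of Live Upgrade.

```shell
# Split a lucreate-style -m specification (mountpoint:device:fstype)
# into its three fields; a "-" mountpoint denotes a swap slice.
explain_m_spec() {
    spec="$1"
    mnt=`printf '%s\n' "$spec" | cut -d: -f1`
    dev=`printf '%s\n' "$spec" | cut -d: -f2`
    fst=`printf '%s\n' "$spec" | cut -d: -f3`
    echo "mountpoint=$mnt device=$dev fstype=$fst"
}
```

For example, `explain_m_spec "/var:/dev/dsk/c3t0d0s3:ufs"` prints the three fields that lucreate interprets for the /var copy.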
One more thing to keep in mind before you press Enter – double check that all your target partitions have the “wm” flag. Otherwise you will receive an error similar to this one:
Template entry /var:/dev/dsk/c3t0d0s3:ufs skipped.
luconfig: ERROR: Template filesystem definition failed for /var, all devices are not applicable..
ERROR: Configuration of boot environment failed.
When lucreate finishes, it’s time to proceed with the final step – upgrading the BE.
It’s wise to do a dry run before jumping into the fray, and that’s why there is the -N option in the first command.
# luupgrade -N -f -n Solaris10 -s /install/Sun/install_server/10 -a /bigfs/sun4u.Solaris_10u8.vxvm5mp3rp2_nbclient65.flar
Quick information about the other options I used with luupgrade:
-f                 Install an operating system from a Solaris Flash archive.
-n BE_name         Name of the BE to receive an OS installation.
-s os_image_path   Path name of a directory containing an OS image. This can be a directory on an installation medium such as a CD-ROM or can be an NFS or UFS directory.
-a archive         Path to the Solaris Flash archive when the archive is available on the local file system.
Since the dry-run finished with no errors it’s time for the real thing:
# luupgrade -f -n Solaris10 -s /install/Sun/install_server/10 -a /bigfs/sun4u.Solaris_10u8.vxvm5mp3rp2_nbclient65.flar
Once it’s done, we need to make our new BE active after the reboot. With lustatus it’s possible to check the current status of all BEs:
# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
Solaris9                   yes      yes    yes       no     -
Solaris10                  yes      no     no        yes    -
# luactivate -n Solaris10
A Live Upgrade Sync operation will be performed on startup of boot environment.

**********************************************************************

The target boot environment has been activated. It will be used when you
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You
MUST USE either the init or the shutdown command when you reboot. If you
do not use either init or shutdown, the system will not boot using the
target BE.

**********************************************************************

In case of a failure while booting to the target BE, the following process
needs to be followed to fallback to the currently working boot environment:

1. Enter the PROM monitor (ok prompt).

2. Change the boot device back to the original boot environment by typing:

     setenv boot-device /pci@8,700000/pci@3/scsi@2/disk@0,0:a

3. Boot to the original boot environment by typing:

     boot

**********************************************************************

Modifying boot archive service
Activation of boot environment <Solaris10> successful.
# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
Solaris9                   yes      yes    no        no     -
Solaris10                  yes      no     yes       no     -
Here is the output from the console which clearly shows what’s going on in the background.
Live Upgrade: Deactivating current boot environment <Solaris9>.
Live Upgrade: Executing Stop procedures for boot environment <Solaris9>.
Live Upgrade: Current boot environment is <Solaris9>.
Live Upgrade: New boot environment will be <Solaris10>.
Live Upgrade: Activating boot environment <Solaris10>.
Creating boot_archive for /.alt.tmp.b-RNc.mnt
updating /.alt.tmp.b-RNc.mnt/platform/sun4u/boot_archive
Live Upgrade: The boot device for boot environment <Solaris10> is </dev/dsk/c3t0d0s0>.
Live Upgrade: Activation of boot environment <Solaris10> completed.
Groovy! Let’s rock and init 6 ;-)
# cat /etc/release
                       Solaris 10 10/09 s10s_u8wos_08a SPARC
           Copyright 2009 Sun Microsystems, Inc.  All Rights Reserved.
                        Use is subject to license terms.
                           Assembled 16 September 2009
Brooding about upcoming Oracle hardware service changes
I just read this on the opensolaris mailing list yesterday, and if you don’t follow it, this information could be of big interest. From now on, forget about the different types of support contract we have got used to – the Platinum, Gold, Silver and Bronze options have been left behind – and get prepared to fork over 12% of your net system price if you’re still thinking about getting support from Oracle/Sun.
Oracle Hardware Service Changes
Support Options
I. Systems
Premier Support for Systems
§ Covers system hardware, OS and virtualization software
§ One level of Service
§ 7/24 with 2 hour onsite response
§ Available within 25 miles of designated metro center
§ 12% of customer’s net system price
**Upon renewal, all current Sun Spectrum hardware and system support customers will be migrated to the new offering, receiving upgraded service levels
II. OS and Systems Software
Premier Support for Operating System
§ Covers Oracle Solaris, Oracle Enterprise Linux and Oracle VM (OVM) running on Sun hardware
§ 8% of customer’s net system price
Premier Support for Software (Non OS)
§ 22% of customer’s net software license value
§ There will only be one price list – Hardware Price List
§ Service pricing is based on hardware price
III. Advanced Customer Services
Packaged Services
§ Installation
§ Professional Services
§ Premier Support Qualification (Recertification)
§ Data and Device Retention (Secure Disk)
Expert Services
§ On Site Resources
§ Custom PS
Operations Management
§ Managed Services
IV. Warranty Information Effective March 16th 2010
1 year from ship date
a. Phone coverage 5×9 Monday-Friday
b. Web coverage 24×7
**Users are required to register their warranty in order to log service requests.
c. Phone Response time
i. P1 – 4 hours
ii. P2 – 8 hours
iii. P3 – next business day
d. Parts Replacement:
i. Customer Replaceable unit: parts exchange only (CRU fee no longer applicable)
ii. Field Replaceable unit: delivered by Oracle or authorized partner
iii. Response SLA: 2 days
e. Firmware fixes provided
V. Renewal Guidelines
i. Upon renewal, all contracts will be migrated to a one year (12 month) Premier Support contract
VI. Service Portfolio Details:
http://www.oracle.com/us/support/systems/operating-systems/index.html
http://www.oracle.com/us/support/systems/premier/index.html
http://www.oracle.com/us/support/systems/advanced-customer-services/index.html
Ominous gap in pmap’s output
I couldn’t help posting this link, which is a must-read if one day you notice something uncanny in the way pmap depicts the memory layout of a process under Linux x86_64:
# pmap -x `echo $$`
28917:   -bash
Address           Kbytes     RSS    Anon  Locked Mode   Mapping
0000000000400000     712       -       -       - r-x--  bash
00000000006b2000      40       -       -       - rw---  bash
00000000006bc000      20       -       -       - rw---    [ anon ]
00000000008bb000      32       -       -       - rw---  bash
0000000002d5b000     368       -       -       - rw---    [ anon ]
00000030ffe00000     112       -       -       - r-x--  ld-2.5.so
000000310001b000       4       -       -       - r----  ld-2.5.so
000000310001c000       4       -       -       - rw---  ld-2.5.so
0000003100200000    1332       -       -       - r-x--  libc-2.5.so
000000310034d000    2048       -       -       - -----  libc-2.5.so
000000310054d000      16       -       -       - r----  libc-2.5.so
0000003100551000       4       -       -       - rw---  libc-2.5.so
0000003100552000      20       -       -       - rw---    [ anon ]
0000003100600000       8       -       -       - r-x--  libdl-2.5.so
0000003100602000    2048       -       -       - -----  libdl-2.5.so
0000003100802000       4       -       -       - r----  libdl-2.5.so
0000003100803000       4       -       -       - rw---  libdl-2.5.so
0000003100a00000      12       -       -       - r-x--  libtermcap.so.2.0.8
0000003100a03000    2044       -       -       - -----  libtermcap.so.2.0.8
0000003100c02000       4       -       -       - rw---  libtermcap.so.2.0.8
00002b04ab767000       4       -       -       - rw---    [ anon ]
00002b04ab771000      12       -       -       - rw---    [ anon ]
00002b04ab774000      40       -       -       - r-x--  libnss_files-2.5.so
00002b04ab77e000    2044       -       -       - -----  libnss_files-2.5.so
00002b04ab97d000       4       -       -       - r----  libnss_files-2.5.so
00002b04ab97e000       4       -       -       - rw---  libnss_files-2.5.so
00002b04ab97f000   55100       -       -       - r----  locale-archive
00002b04aef4e000      28       -       -       - r--s-  gconv-modules.cache
00002b04aef55000       4       -       -       - rw---    [ anon ]
00007fff393d2000      84       -       -       - rw---    [ stack ]
ffffffffff600000    8192       -       -       - -----    [ anon ]
----------------  ------  ------  ------  ------
total kB           74352       -       -       -
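To put a number on such gaps, one can compute the distance between the end of one mapping and the start of the next. This is a rough sketch of mine, not a standard tool: `pmap_gaps` is a made-up name, and it assumes simplified pmap-style input whose first two fields are a hex start address and a size in kilobytes.

```shell
# Print the gap (in KB) between consecutive mappings of a pmap-style listing
# read from stdin. Field 1: hex start address, field 2: size in KB.
pmap_gaps() {
    awk '
        # Convert a hex string to a number (portable awk, no strtonum needed).
        function hex(s,   n, i) {
            n = 0
            for (i = 1; i <= length(s); i++)
                n = n * 16 + index("0123456789abcdef", substr(tolower(s), i, 1)) - 1
            return n
        }
        /^[0-9a-f]+ /{
            start = hex($1)
            if (prev_end != "" && start > prev_end)
                printf "gap of %d KB before %s\n", (start - prev_end) / 1024, $1
            prev_end = start + $2 * 1024   # end address of this mapping
        }'
}
```

Fed the first two bash mappings above, it reports the 2048 KB hole between the end of the text segment and the start of the data segment.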
Once again, if you’re scratching your head and trying to understand where those gaps came from, just go here and find the answer to this riddle.
Everything is in our hands
Yesterday I had a chance to be a part of the Moscow OpenSolaris User Group meeting. It’s worth noting that it was two or three years ago that the Moscow community last gathered together. I do hope that from now on these meetings will become more frequent and more fruitful.
Back to the point. The outcome and the total impression was very positive and engaging indeed. I was really pleased to hear from Jim Grisanzio who, in spite of the time shift between Moscow and Tokyo, was able to fork out some of his spare time and delivered an encouraging speech to all present.
The next part of the meeting was more technical (LDOMs and ZFS); it didn’t go into deep detail, but it was still quite informative and raised several good points on each subject. During the short breaks we all had an opportunity to talk to Sun engineers and feed our curiosity straight from the horse’s mouth.
So in the end, I enjoyed the event and look forward, warmly and with deep anticipation, to participating in the next ones.
Moscow OpenSolaris User Group Meeting
If by coincidence you are in Moscow and interested in OpenSolaris, then the following information could feed your curiosity: there is going to be a meeting on the 17th of March dedicated to OpenSolaris. Come and join us here.
Look forward to seeing you there.
Oracle’s filming about ZFS
If you haven’t already seen these two introductory videos that explain, from a high-level perspective, different aspects of ZFS, i.e. deduplication and ZFS dynamic LUN expansion, here they are, available to all of us by courtesy of George Wilson and Deirdré Straughan:
ZFS Dynamic LUN Expansion
ZFS Deduplication
Enjoy!
EIS-DVD, where are you?
I hope that’s just a temporary mess that will be successfully resolved soon. Frankly speaking, I can hardly imagine a Sun field engineer without EIS.
The EIS team regrets that EIS-DVD 23FEB10 has been cancelled.
This is due to purchase order approval changes for manufacturing and distribution as a result of the transfer of control to ORACLE Corporation. We have not been able to obtain approval for the manufacturing and distribution.
We do not have the infrastructure (or legal approval) to provide download capability.
It is hoped that we can return to normal schedule with EIS-DVD 30MAR10.
I do hope so too because, as far as I know, no other company, e.g. HP or IBM, has developed anything similar to the EIS methodology. It’s not only a virtual trump card but has proven over the years to be incredibly useful and at times simply indispensable.
Runlevels in HP-UX and Solaris
In the following post I’d like to dwell upon the differences between the run levels in Solaris and HP-UX.
So first, let’s take a quick look at the description (man init(1M)) of the run levels supported in these two operating systems:
HP-UX
Run-level | Description |
---|---|
0 | Shut down HP-UX. |
S|s | Use for system administration (also known as "single-user state"). When booting into run level S at powerup, the only access to the system is through a shell spawned at the system console as the root user. The only processes running on the system will be kernel daemons started directly by the HP-UX kernel, daemon processes started from entries of type sysinit in /etc/inittab, the shell on the system console, and any processes started by the system administrator. Administration operations that require the system to be in a quiescent state (such as the fsck(1M) operation to repair a file system) should be run in this state. Transitioning into run level S from a higher run level does not terminate other system activity and does not result in a "single-user state"; this operation should not be done. |
1 | Start a subset of essential system processes. This state can also be used to perform system administration tasks. |
2 | Start most system daemons and login processes. This state is often called the "multi-user state". Login processes either at local terminals or over the network are possible. |
3 | Export filesystems and start other system processes. In this state NFS filesystems are often exported, as may be required for an NFS server. |
4 | Activate graphical presentation managers and start other system processes. |
5-6 | These states are available for user-defined operations. |
Solaris
Run-level | Description |
---|---|
0 | Go into firmware |
1 | Put the system in system administrator mode. All local file systems are mounted. Only a small set of essential kernel processes are left running. This mode is for administrative tasks such as installing optional utility packages. All files are accessible and no users are logged in on the system. |
2 | Put the system in multi-user mode. All multi-user environment terminal processes and daemons are spawned. This state is commonly referred to as the multi-user state. |
3 | Extend multi-user mode by making local resources available over the network. |
4 | Is available to be defined as an alternative multi-user environment configuration. It is not necessary for system operation and is usually not used. |
5 | Shut the machine down so that it is safe to remove the power. Have the machine remove power, if possible. |
6 | Stop the operating system and reboot to the state defined by the initdefault entry in /etc/inittab. |
S, s | Enter single-user mode. This is the only run level that doesn't require the existence of a properly formatted /etc/inittab file. If this file does not exist, then by default, the only legal run level that init can enter is the single-user mode. When in single-user mode, the filesystems required for basic system operation will be mounted. |
Q, q | Re-examine /etc/inittab. |
Apart from the obvious distinctions between run levels with the same numbers – just take a closer look at run levels 5 and 6 – there is a fundamental difference in the way that services are started and stopped when a transition from one run level to another takes place.
In HP-UX there is a single script, /sbin/rc, which acts as a general-purpose sequencer invoked upon entering a new run level. But what is more important is the following behavior of the rc script:
If a transition from a lower to a higher run level (i.e., init state) occurs, the start scripts for the new run level and all intermediate levels between the old and new level are executed. If a transition from a higher to a lower run level occurs, the kill scripts for the new run level and all intermediate levels between the old and new level are executed.
Let’s compare that to the way Solaris does the same job. I’m going to concentrate on Solaris 9, leaving the latest Solaris 10 for the last bit. Just like HP-UX, Solaris has its own /etc/inittab file, which controls process dispatching by init. The inittab file is composed of entries that are position dependent and have the following format:
id:rstate:action:process
ap::sysinit:/sbin/autopush -f /etc/iu.ap
ap::sysinit:/sbin/soconfig -f /etc/sock2path
fs::sysinit:/sbin/rcS sysinit >/dev/msglog 2<>/dev/msglog </dev/console
is:3:initdefault:
sS:s:wait:/sbin/rcS >/dev/msglog 2<>/dev/msglog </dev/console
s0:0:wait:/sbin/rc0 >/dev/msglog 2<>/dev/msglog </dev/console
s1:1:respawn:/sbin/rc1 >/dev/msglog 2<>/dev/msglog </dev/console
s2:23:wait:/sbin/rc2 >/dev/msglog 2<>/dev/msglog </dev/console
s3:3:wait:/sbin/rc3 >/dev/msglog 2<>/dev/msglog </dev/console
s5:5:wait:/sbin/rc5 >/dev/msglog 2<>/dev/msglog </dev/console
s6:6:wait:/sbin/rc6 >/dev/msglog 2<>/dev/msglog </dev/console
fw:0:wait:/sbin/uadmin 2 0 >/dev/msglog 2<>/dev/msglog </dev/console
of:5:wait:/sbin/uadmin 2 6 >/dev/msglog 2<>/dev/msglog </dev/console
rb:6:wait:/sbin/uadmin 2 1 >/dev/msglog 2<>/dev/msglog </dev/console
But, in contrast to HP-UX, there is a single /etc/rc[S012356] script for every run level. Moreover, in Solaris only the scripts that pertain to the run level we’re switching into are executed. These scripts are placed in corresponding directories, i.e. /etc/rc0.d/ for run level zero, /etc/rc1.d/ for the first and so on. In other words, if we’re transitioning from level S (single-user mode) to the second run level, then no start/stop scripts from the zero and first run levels are run: init jumps right into the /etc/rc2.d/ directory, where the kill, or K*, scripts are run first, followed by the start, or S*, scripts.
if [ $_INIT_PREV_LEVEL = S -o $_INIT_PREV_LEVEL = 1 ]; then
        echo 'The system is coming up.  Please wait.'
elif [ $_INIT_RUN_LEVEL = 2 ]; then
        echo 'Changing to state 2.'
        if [ -d /etc/rc2.d ]; then
                for f in /etc/rc2.d/K*; do
                        if [ -s $f ]; then
                                case $f in
                                        *.sh)   .        $f ;;
                                        *)      /sbin/sh $f stop ;;
                                esac
                        fi
                done
        fi
fi

if [ $_INIT_PREV_LEVEL != 2 -a $_INIT_PREV_LEVEL != 3 \
    -a $_INIT_PREV_LEVEL != 4 -a -d /etc/rc2.d ]; then
        for f in /etc/rc2.d/S*; do
                if [ -s $f ]; then
                        case $f in
                                *.sh)   .        $f ;;
                                *)      /sbin/sh $f start ;;
                        esac
                fi
        done
fi
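The K-before-S ordering that rc2 implements can be imitated in a few lines against a scratch directory standing in for /etc/rc2.d. A sketch only: `run_level_dir` is a made-up name, and it deliberately skips the *.sh sourcing special case.

```shell
# Imitate the Solaris rc2 ordering within one run level directory:
# kill (K*) scripts run first with the "stop" argument, then start (S*)
# scripts run with the "start" argument.
run_level_dir() {
    dir="$1"
    for f in "$dir"/K*; do
        [ -s "$f" ] && sh "$f" stop
    done
    for f in "$dir"/S*; do
        [ -s "$f" ] && sh "$f" start
    done
}
```

Dropping a K10foo and an S20bar script into a temporary directory and calling `run_level_dir` on it shows the stop action firing before the start action, just as on a real run-level transition.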
The third run level is an exception, though. If you take a closer look at the /etc/inittab file from Solaris, you’d notice that /sbin/rc2 is also executed when switching into the third run level.
s2:23:wait:/sbin/rc2 >/dev/msglog 2<>/dev/msglog </dev/console
s3:3:wait:/sbin/rc3 >/dev/msglog 2<>/dev/msglog </dev/console
By the way, it’s worth mentioning that most kill scripts can be found in /etc/rc0.d/ and /etc/rc1.d/, so please pay attention to this.
In Solaris 10, with the introduction of SMF (Service Management Facility), things have changed noticeably, bringing substantial enhancements to the traditional UNIX start-up scripts. I’m not going to linger on that subject in great detail, but I just want to mention that run levels have been replaced with milestones, and for legacy purposes startup programs in the /etc/rc?.d directories are executed as part of the corresponding run-level milestone:
/etc/rcS.d milestone/single-user:default
/etc/rc2.d milestone/multi-user:default
/etc/rc3.d milestone/multi-user-server:default
In addition, a new restarter daemon for SMF and for all services has been added to the system – svc.startd.
Quick excerpt from its man page:
svc.startd maintains service state, as well as being responsible for managing faults in accordance with the dependencies of each service. svc.startd is invoked automatically during system startup. It is restarted if any failures occur. svc.startd should never be invoked directly.
So now a good question or even a series of them could start vibrating in your mind:
- How many milestones do I actually have?
- How do I check the default run-level or a milestone?
- How could I list what services are fired up in each milestone?
- And finally, how to switch from one milestone to another?
To tell the truth, it’s all dead easy.
- Simply by running the following command you can get an answer to the first question:
# svcs -a | grep milestone
online         Dec_11   svc:/milestone/name-services:default
online         Dec_11   svc:/milestone/network:default
online         Dec_11   svc:/milestone/devices:default
online         Dec_11   svc:/milestone/single-user:default
online         Dec_11   svc:/milestone/sysconfig:default
online         Dec_11   svc:/milestone/multi-user:default
online         Dec_11   svc:/milestone/multi-user-server:default
With “svcs -l” you can get a longer description of each service. Moreover, this command will also reveal its dependencies and whether they are optional or required:
# svcs -l svc:/milestone/multi-user-server:default
fmri         svc:/milestone/multi-user-server:default
name         multi-user plus exports milestone
enabled      true
state        online
next_state   none
state_time   December 11, 2009 12:20:44 PM MSK
logfile      /var/svc/log/milestone-multi-user-server:default.log
restarter    svc:/system/svc/restarter:default
dependency   optional_all/none svc:/system/cluster/cl-svc-enable:default (online)
dependency   require_all/none svc:/milestone/multi-user (online)
dependency   optional_all/none svc:/application/management/dmi (online)
dependency   optional_all/none svc:/application/management/snmpdx (online)
dependency   optional_all/none svc:/network/rpc/bootparams (disabled)
dependency   optional_all/none svc:/network/samba (disabled)
dependency   optional_all/none svc:/network/winbind (disabled)
dependency   optional_all/none svc:/network/wins (disabled)
dependency   optional_all/none svc:/network/nfs/server (disabled)
dependency   optional_all/none svc:/network/rarp (disabled)
dependency   optional_all/none svc:/network/dhcp-server (disabled)
dependency   optional_all/none svc:/network/ssh (online)
- Using the svcprop command one can list all the properties that belong to a given service.
# svcprop svc:/system/svc/restarter:default
general/enabled boolean true
general/entity_stability astring Unstable
general/single_instance boolean true
restarter/auxiliary_state astring none
restarter/next_state astring none
restarter/state astring online
restarter/state_timestamp time 1260522889.099316000
restarter/start_pid count 8
restarter/contract count 4
restarter/alt_logfile astring /etc/svc/volatile/svc.startd.log
restarter/logfile astring /var/svc/log/svc.startd.log
system/reconfigure boolean false
options/milestone astring all
tm_common_name/C ustring master\ restarter
tm_man_svc_startd/manpath astring /usr/share/man
tm_man_svc_startd/section astring 1M
tm_man_svc_startd/title astring svc.startd
As you can see, there is a special property, options/milestone, that defines the milestone used as the default boot level. Allow me to quote the man page once again:
Acceptable options include only the major milestones:
svc:/milestone/single-user:default
svc:/milestone/multi-user:default
svc:/milestone/multi-user-server:default

or the special values all or none. all represents an idealized milestone that depends on every service. none is a special milestone where no services are running apart from the master svc:/system/svc/restarter:default.

By default, svc.startd uses all, a synthetic milestone that depends on every service. If this property is specified, it overrides any initdefault setting in inittab(4).

To change the default milestone just run the following command:
# svccfg -s svc:/system/svc/restarter:default setprop options/milestone=svc:/milestone/multi-user-server:default
# svcprop -p options/milestone svc:/system/svc/restarter:default
svc:/milestone/multi-user-server:default
- Use “svcs -D” against the milestone you’re interested in to get the list of all its dependents.
- And finally, to switch from one milestone to another, use the “svcadm milestone” command:
# svcs -a | grep milestone
online         Dec_11     svc:/milestone/name-services:default
online         Dec_11     svc:/milestone/network:default
online         Dec_11     svc:/milestone/devices:default
online         Dec_11     svc:/milestone/single-user:default
online         Dec_11     svc:/milestone/sysconfig:default
online         Dec_11     svc:/milestone/multi-user:default
online         10:55:39   svc:/milestone/multi-user-server:default
offline        10:55:36   svc:/system/cluster/cl-svc-cluster-milestone:default
# svcadm milestone svc:/milestone/single-user:default
# svcs -a | grep milestone
disabled       10:57:28   svc:/milestone/multi-user-server:default
disabled       10:57:28   svc:/system/cluster/cl-svc-cluster-milestone:default
disabled       10:57:38   svc:/milestone/multi-user:default
disabled       10:58:06   svc:/milestone/sysconfig:default
disabled       10:58:06   svc:/milestone/name-services:default
online         Dec_11     svc:/milestone/network:default
online         Dec_11     svc:/milestone/devices:default
online         Dec_11     svc:/milestone/single-user:default
If you add the “-d” option to the “svcadm milestone” command, then that milestone becomes the default one as well.
# svcs -D svc:/milestone/multi-user:default
STATE          STIME    FMRI
disabled       Dec_11   svc:/network/dhcp-server:default
disabled       Dec_11   svc:/system/iscsitgt:default
disabled       10:17:00 svc:/application/cde-printinfo:default
disabled       10:17:00 svc:/application/graphical-login/cde-login:default
disabled       10:17:01 svc:/system/vxvm/vxvm-recover:default
disabled       10:17:32 svc:/application/management/common-agent-container-1:default
online         Dec_11   svc:/system/cluster/cl-svc-enable:default
online         Dec_11   svc:/milestone/multi-user-server:default
That’s it. Feel free to comment if I’ve missed anything or made a mistake.
Cheers.