Following up on this post, I've decided to go ahead and post the current versions of all my scripts. I should explain a little about my environment. I have the VMs set up as clones of a vanilla* CentOS 6.3 64-bit installation. They each use DHCP to get their IP addresses from a DHCP server that has reservations so they always get the same addresses†. On the same network I have a web server that hosts the installers for CAPC, IMDA, & IMDR. So, I've written scripts that will download the installers, run all the prerequisite steps and checks, and run the installers. The scripts are here, here, here, here, here, & here. Note that they are not up to date with the current installer and will need to be modified to work properly with it.
Here's how I use them:
DLAll.sh - this is not really a script, just a text file containing the bootstrap commands to kick off all the other scripts. I copy and paste from this file into the ssh session. Each section is clearly commented as to what it does. The password to get into my server is masked, but it wouldn't work on your network unless your server were a clone of mine anyway. Modify as you wish. I couldn't get the PC to talk to the DA without disabling the firewalls, so that's at the top. I usually disable the firewalls on all of them and will probably build that configuration into my base image in the future. There's a rough sketch of what this bootstrap block looks like after this list.
InstallDR1.sh - this script gets everything ready for the Data Repository installation. These are basically all the commands that are run as root.
InstallDR2.sh - this script contains the commands that the dradmin user needs to execute. This one requires manual intervention at four points, so I've documented what I do at each stop.
InstallDA.sh - this one installs the Data Aggregator.
InstallDC.sh - this one installs the Data Collector.
InstallPC.sh - this one installs Performance Center.
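For reference, here's a minimal sketch of the kind of bootstrap block DLAll.sh contains. The web server address, path, and script name below are placeholders standing in for my lab values, not the real ones:

# Firewalls off first; the PC can't talk to the DA otherwise.
service iptables stop
chkconfig iptables off

# Make sure wget is on the box, then pull a script from the lab web server and run it.
# The address and path are placeholders for my lab server.
yum -y install wget
wget http://192.168.1.10/scripts/InstallPC.sh
chmod +x InstallPC.sh
./InstallPC.sh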
I think that's all of them, so I'm going to leave it at that for now. If you've got any suggestions on how the scripts could be improved, let me know.
*Vanilla=updates applied and startup mode changed to console only
†This may be the source of other problems, since the Linux hosts file doesn't actually have the IP address in it.
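If that ever bites you, the quick fix is to drop the reserved address into /etc/hosts yourself. The address and hostname below are made-up examples; use whatever your DHCP reservation hands out:

# Map this host's reserved DHCP address to its hostname (example values only).
echo "192.168.1.21 capc01.lab.local capc01" >> /etc/hosts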
Wednesday, September 26, 2012
Thursday, September 20, 2012
Installing CAPC on vanilla CentOS 6.3
Since IM 2.0 came out, I've been testing the installation of the various components on CentOS 6.3. (No, you're not seeing things; my last post was going to be this post, but I went on a rant instead.) I've come up with a few scripts that I use to get everything configured. The first script was actually to install the data aggregator and repository. It's not really a script, since some parts can't be scripted. However, the parts that can be scripted have been. I just copy and paste the commands into a putty window.
Anyway, the CAPC installation can be completely automated. This is on a vanilla CentOS 6.3 64-bit installation. The only change I made was to change the /etc/inittab to boot to the console instead of the GUI (change the 5 to a 3) and reboot. I don't bother setting static IP addresses in my lab. I use DHCP reservations. It makes it a snap to configure. I connect to the server via putty and paste in the following commands:
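(The block below is a rough sketch of what those commands do. The web server address and installer file name are placeholders for my lab values, and the silent flag assumes an InstallAnywhere-style installer.)

# Firewall off so nothing blocks the web UI later.
service iptables stop
chkconfig iptables off

# Fetch the CAPC installer from the lab web server (placeholder address and file name).
yum -y install wget
wget http://192.168.1.10/installers/installCAPC.bin
chmod +x installCAPC.bin

# Run it unattended; -i silent assumes an InstallAnywhere-style installer.
./installCAPC.bin -i silent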
There you go. I'll post the other scripts in separate posts since they're a bit more involved.
IM 2.0 on Linux vs. Windows
With the release of IM 2.0, I've been testing the installation of the various components on CentOS because my lab's investor (my wife) doesn't see the need to purchase a Red Hat license. All the better anyway, since others might want to know whether CentOS is an option for installation to save on adoption costs. Frankly, I'm not sure why CA decided to go with RHEL.
While it is probably the most popular Linux server operating system, all (I repeat, ALL) of the previous NetQoS software ran on Windows. I'm not counting MTP, since it's sold as an appliance, not software. The target audience for the original NetQoS products was the network engineer. It has since bloomed to include application owners and server administrators. However, if you look at the role of the person who normally administers and champions the NetQoS products, it's still a network engineer.
It is my opinion that network engineers are most familiar with two operating systems: Cisco IOS and Windows. There will be cases where the network engineer used to be on the server team but is now working on the network side. While this obviously happens, I think there are just as many server-turned-network engineers who come from Linux/mixed environments (Windows & Linux; come on, even Linux-only environments have Exchange) as come from Windows-only environments. So, my conclusion is that most network engineers will be most familiar with Cisco IOS and Windows (from both server OS and desktop OS experience). IM 2.0 should have been released on Windows.
There is another possible reason to use Linux over Windows: speed. I agree with this argument. Even with CentOS, I can turn off the GUI and save the resources that would otherwise be dedicated to displaying a locked screen 99.999% of the time. However, the minimum RAM requirement for IM 2.0 is 4GB. What!? I thought Linux was a better performer and could get away with not having as much RAM. Well, it turns out that even in a lab environment monitoring a very small infrastructure, 3GB isn't always enough. The fact that I installed DA/DR on a box with only 1GB was pointed to as a possible reason why I was seeing problems on my installation. Wait, guys: if I have to dedicate a ton of resources anyway, why don't we just run it on Windows?
Wasn't IM 2.0 supposed to be developed in Java? If that's the case, why does the OS even matter? Shouldn't it be a fairly trivial matter to build installers for all of the major operating systems?
I'm not a developer, so you really shouldn't be reading any of this without your tongue planted firmly in your cheek. But still.
Really?
I have to learn Linux?
Really?
I have to purchase RHEL?
Really?
I have to dedicate at least 4GB of RAM in a lab environment?
Really?