When a Windows Server has two names after Windows Update

It happens: you're installing Windows updates on a mission-critical server and the restart process gets stuck. Tens of minutes pass, we get impatient, and we reboot by force.

This is the story of when a forced reboot caused a pretty weird problem: an Exchange server got renamed, sort of. At a lower level, the server was renamed as if it had been sysprepped (as shown by running "systeminfo"), but the server's properties, and even typing "hostname" in CMD, display the proper name:

Windows server displaying 2 different server names.


What's troubling is that all communication with Active Directory fails, because the server presents a generic server name, not the one on its AD account. This causes Exchange to fail.

Very bizarre, but the biggest problem, as you can see in the screenshot, is that simply re-renaming the server doesn't work. We need to dig deeper to change the computer name.

Can't rename the computer through traditional means either.


 

To fix this, we need to go into the registry and modify the computer name manually. Before doing this, be sure you have a backup; modifying the registry can have some very negative effects. The keys to check are HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\ComputerName\ActiveComputerName and HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\ComputerName\ComputerName.

Location of the Registry that contains the bad server name.


The value to edit is ComputerName; set it to the appropriate name and reboot!
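
If you'd rather script it than click through regedit, here is a minimal sketch from an elevated CMD prompt (PROPER-NAME is a placeholder for your real server name; export the key first as your backup):

REM confirm what's currently stored
reg query "HKLM\SYSTEM\CurrentControlSet\Control\ComputerName\ComputerName" /v ComputerName

REM overwrite the stored name, then reboot
reg add "HKLM\SYSTEM\CurrentControlSet\Control\ComputerName\ComputerName" /v ComputerName /t REG_SZ /d PROPER-NAME /f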

 

One thing that I really want to emphasize: do not remove the server from the domain and re-add it. In my case that could have caused MAJOR issues with Exchange. The SID never changed, and keeping the SID intact prevents further issues.

 

 

Getting back into the swing of things!

Well, it HAS been a while since I wrote a post! Let's just say life throws you one curve ball after another, but what's important is that I am back, and as passionate as ever to continue what I started here with my blog MicoITBlog.com.

There will be a change from my previous posts on the blog. Before, I would write about weird issues I encountered and just take a quick note about them. Now I would like to branch out: IT design posts, reviews of what I'm currently reading, or scripts I'm writing to make my job that much easier.

It feels good to be back; you will definitely hear more from me soon.

My Sub-$1000 Home Lab

Test, test, test, test... I gotta test this new OS, I gotta try this new SQL server, I gotta try this new backup solution, I need to practice before a certification exam. As IT professionals, the best way to learn is to try the things we learn, and like anything in life, practice makes perfect.

Here is the problem: trying stuff out in IT can get expensive, and running a lab copy of your network can become costly. A typical VMware environment with vMotion capability can cost thousands of dollars: two servers and a SAN. Even if you don't want a full VMware environment, just a server to run tests can be pretty hefty. What motherboard should I choose? Should I go for an "enterprise ready" board? What about CPU? Should I go dual CPU? RAM? Drive space? What about the OS? Should I use ESXi? Hyper-V? Xen?

I had been thinking about this for a while, and I finally got the funds and the gear to assemble my home lab. Here are the details:

Motherboard: ASUS P8Z77-V LX $134
CPU: Intel Core i5 3330 @ 3.0GHz $190
RAM: 4 x Kingston 8GB 1600MHz DDR3 non-ECC CL11 DIMM $260
Power supply: SPI 650W EPS 24/8-pin Active PFC RoHS $100
SSD: OCZ Vertex 2 3.5" 120GB SATA II MLC $105
HDD: Seagate Barracuda 1TB SATA 7200RPM (ST1000DM003) $90
OS: Ubuntu 12.04 LTS Desktop FREE
Software: VirtualBox FREE
Case: IKEA Besta cabinet $65

Total cost: $810 CDN (before tax)

I know there are some things on that list that seem a bit strange, but I assure you, I will explain.

Motherboard:

This motherboard was the cheapest I could find that supports Intel VT, and it's a board I knew played nice with Ubuntu in terms of drivers. The fact that it only supports 32GB of RAM and a single processor didn't bother me: my virtualized environments require about 2GB of RAM per VM at most, which lets me run 16 VMs comfortably.

CPU:

For the CPU, I didn't need a top-of-the-line i7, just enough horsepower to work with 16 VMs. The i5 at 3GHz was a good match of speed and affordability.

RAM:

I maxed out the RAM the board supports; it beats re-buying 8GB sticks to add to the system later. When it comes to RAM, I didn't hold back.

Power Supply:

I went for a 650W power supply, nothing too crazy, just something that will power the unit correctly (if it made noise, so be it).

SSD and HDD:

I went with an SSD and an HDD. I wanted the OS to boot up quickly, plus the option of running VMs FAST. 120GB is sufficient for that: if I put VMs on the SSD, it would only be the OS virtual hard disk, which usually runs about 20GB. 1TB of hard drive space is plenty for data partitions and ISOs.

OS:

So, why Ubuntu? Why not run ESXi directly? If you have seen the HCL of ESXi, you would understand my frustration in building a home lab with it. The HCL is very restrictive. That makes sense in an enterprise world, but when you're on a budget, you don't want to take a chance with it. I even went with the desktop version instead of the server edition, and with a type 2 hypervisor instead of a type 1!?!? I'll explain in the next section.

Software:

I chose VirtualBox as my virtualization software, for one simple reason: VirtualBox is free. I would prefer VMware Workstation, but that is $250, which would blow the budget (maybe next time!). The reason I went with a type 2 hypervisor is that I wanted a desktop on my lab setup. I know it sounds strange, but logging into my lab via VNC gives me a sense of separation from my systems, I don't need to run tools on my production PC / Mac laptop in order to run the lab, and diagnosing issues is simpler on desktop Ubuntu than going all command line in the server edition or troubleshooting ESXi. I'd rather troubleshoot in my lab than troubleshoot the lab itself. Another pro is that I can manage my lab ANYWHERE with ANYTHING. Every iDevice and Android phone has a VNC client, which means I can run tests while I'm in bed or sitting in front of the TV with my iPad 2, my MacBook, or my Windows 8 PC.
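
As a bonus, VirtualBox can also be driven entirely from a shell, so the VNC session is optional. A minimal sketch over SSH (the VM name "Win2008R2" is just an example):

# list the VMs registered on the lab box
VBoxManage list vms

# start one without opening a GUI window
VBoxManage startvm "Win2008R2" --type headless

# and shut it down cleanly when done
VBoxManage controlvm "Win2008R2" acpipowerbutton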

Case:

Here is another weird decision: furniture for a case? I'll explain. I wanted my casing solution to be future proof. Considering this lab cost me very little, I can add another system to it later, giving me dual CPUs and 64GB of RAM if I ever need them; what I didn't want to re-invest in was another case. I decided my case should hold multiple instances of my current setup, so I went to IKEA and got a cheap shelving unit. When I assembled the unit, I left the back open to make sure I wasn't suffocating it with heat. If I ever add another kit of motherboard / CPU / RAM / SSD / HDD, I'll just use another shelf.

The results

I got a screamer for a lab: when I tested an install of Windows Server 2008 R2, it took about 3 minutes to complete a full install. Not bad! My only complaint with VirtualBox is that it takes all the RAM you allocate to a VM up front, instead of giving the VM what it needs and freeing the rest for other VMs. Ah well, if I need that, I'll go to VMware Workstation.

Creating shares via command line/Scripting

Sometimes in IT we need to be resourceful to provide solutions, even if they're unorthodox. A few days ago I had to script the creation of a share. The share in question lived on a removable drive used for backups on a Windows server. Before each backup, the script needs to delete any share of the same name, then create a share called "backup" directly on the root of the removable/USB drive.

To do this, we can use the NET command in a batch file:

NET SHARE sharename=f:\ /grant:everyone,FULL

This creates a share named "sharename" whose location is the root of the F: drive, and grants everyone full access to the share. These are share permissions; they do not modify the NTFS permissions in place. That will be discussed in a future blog post.

NET SHARE sharename /delete /Y

This command deletes the share. The /Y parameter answers yes for you if users are still connected to the share.

So the script in the end will run:

NET SHARE sharename /delete /Y

NET SHARE sharename=f:\ /grant:everyone,FULL
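
Put together as a small batch file, the pre-backup step might look like this (a sketch; the share name "backup" and the F: drive are from my setup, swap in your own):

@echo off
REM clear out any leftover share, answering yes if users are still connected
NET SHARE backup /delete /Y
REM re-create the share at the root of the removable drive
NET SHARE backup=F:\ /grant:everyone,FULL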

Hot Adding a VMDK to Linux Without Reboot (Rescan SCSI Bus)

Virtualization has opened up possibilities that weren't possible on some servers; one of those is adding a hard disk on the fly to any server. This is especially great when you need to add more storage to a file server, backup server, etc.

Now don't get me wrong, I know we could do this before, but there was a cost barrier involving specific hardware; virtualization enables it on all VMs.

In Windows, when you add a new virtual hard drive, the system just detects the change and publishes it in My Computer (although sometimes a rescan in Server Manager is required). In Linux, it's not that straightforward when we're dealing with a GUI-less server.

Thankfully there is a command that will rescan the SCSI bus and detect new hard drives:

echo "- - -" > /sys/class/scsi_host/host0/scan

This will detect new hard drives and display them in /dev.

After that, mount the new drives to whatever mount point you choose.
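
One thing to watch: the new disk isn't always hanging off host0. A little sketch (run as root) that rescans every SCSI host on the box so you don't have to guess the number:

# rescan every SCSI host; the new disk may not be on host0
for h in /sys/class/scsi_host/host*; do
    echo "- - -" > "$h/scan"
done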

Before rescanning the bus, only 2 hard drives exist (sda and sdb).


After running the command, sdc is now visible to the system.


Installing Multiple Instances of Tomcat on the Same Server

Tomcat is a pure-Java HTTP server for hosting Java applications. While testing these applications, we often need multiple instances of Tomcat. We can achieve this by scaling out virtual machines, or by deploying multiple instances of Tomcat on the same server.

In this article, I want to go through the steps of creating multiple Tomcat instances on the same box. The instances can be the same version or different versions, for example, Tomcat 6 and 7 running in harmony.

First, we need to get the JDK from Oracle and the .tar from Apache.

Next, we need to install the JDK on the system.

Installing JDK

Note: As of this writing the most up-to-date version of the JDK is 7u7, and that is what I'm using for this article. Mind you, the process is the same for previous versions.

Let's make the directory /usr/java and go to it:

mkdir -p /usr/java/
cd /usr/java

Let's make the RPM file we downloaded executable. I placed my downloaded files in a folder called /temporary:

chmod 700 /temporary/jdk-7u7-linux-x64.rpm

Install it with rpm -i:

rpm -i /temporary/jdk-7u7-linux-x64.rpm

The JDK is now installed; let's install Tomcat!

Tomcat

Tomcat is actually a very simple install. It's a matter of extracting the files to a location, modifying a few scripts, and running the startup scripts. For those not 100% familiar with how Tomcat works, it configures itself from the environment variables of the user executing the startup script.

First things first: let's create a new user called tomcat and make it impossible to log on as it.

groupadd tomcat
useradd -g tomcat -s /usr/sbin/nologin -m -d /home/tomcat tomcat

Now let's extract the files. Remember, our files are in /temporary:

cd /temporary
tar -xf apache-tomcat-7.0.32.tar.gz

Copy them to the /opt folder

cp -r /temporary/apache-tomcat-7.0.32 /opt/tomcat7

For every instance, we need to copy the folder over. For this exercise, we will create 2 instances:

cp -rf /opt/tomcat7 /opt/tomcat7-1
cp -rf /opt/tomcat7 /opt/tomcat7-2

Change the owner of the folders to the group tomcat and the user tomcat:

chown -R tomcat:tomcat /opt/tomcat7-1
chown -R tomcat:tomcat /opt/tomcat7-2

Now we need to modify the startup and shutdown scripts. To do so, let's go into the bin folder of each instance.

cd /opt/tomcat7-1/bin

In this folder are the startup.sh and shutdown.sh scripts. We need to modify them to include the right environment variables. Add these lines to the top of startup.sh:

export JAVA_HOME=/usr/java/jdk1.7.0_07
export PATH=$JAVA_HOME/bin:$PATH
export BASEDIR=/opt/tomcat7-1
export CATALINA_BASE=/opt/tomcat7-1
export CATALINA_HOME=/opt/tomcat7-1

It should look like this:

How your script should look with the script modifications.


Once done, do the same for the shutdown script (shutdown.sh).

Next, in the conf folder, we need to modify server.xml to change the ports Tomcat uses. This is how we do it:

There are two ports we care about: (1) the actual HTTP port and (2) the shutdown port. For every instance we create, these ports need to be different (later we will see how to answer on the same port with another method). If you've left the AJP connector enabled, its port (8009 by default) needs to be unique per instance too.

For our first instance, we can leave the defaults, 8080 and 8005. For the next instance, we change them (I like adding 100 to the defaults, which gives 8180 and 8105 for the second instance).

Configuring the shutdown port from 8005 to 8105 in server.xml


Configuring the http port from 8080 to 8180 in server.xml

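If you'd rather not open an editor for this, here is a quick sketch that makes the same two edits with sed (assuming tomcat7-2 still contains the stock defaults):

cd /opt/tomcat7-2/conf
# shutdown port 8005 -> 8105
sed -i 's/port="8005"/port="8105"/' server.xml
# HTTP connector 8080 -> 8180
sed -i 's/port="8080"/port="8180"/' server.xml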

Now, to run an instance of Tomcat, let's execute the startup script as the tomcat user:

su -p -s /bin/sh tomcat startup.sh
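
That assumes you're still in the instance's bin folder. To bring both instances up from anywhere, the same idea with full paths (a sketch, paths as created above):

su -p -s /bin/sh tomcat /opt/tomcat7-1/bin/startup.sh
su -p -s /bin/sh tomcat /opt/tomcat7-2/bin/startup.sh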

Let's check out our separate Tomcat instances:

http://ipaddress-of-server:8080

Tomcat7-1 Operational


http://ipaddress-of-server:8180

Tomcat7-2 Operational


Success, now we are up and operational!

Next Steps

To get into the manager, we need to configure user accounts for it by modifying tomcat-users.xml in the conf folder:

  <role rolename="manager-gui"/>

  <user username="admin" password="tomcat" roles="manager-gui"/>

Here we added the role "manager-gui" and applied it to the user "admin" with the password "tomcat".

HAProxy

I mentioned earlier that there is a way to have all your Tomcat instances respond on the same HTTP port. We can achieve this with HAProxy, an open-source load balancer that can route HTTP requests to web servers listening on different ports. For a full breakdown of the configuration, please read my post on HAProxy here.

New Site Design

Hello All,

I just wanted to thank every visitor I have had since I started the blog, and I hope my posts have been helpful to you. I had been considering a redesign for a while now. If you recall, the old design was based on a standard template and hosted on WordPress. I decided to move the blog to Squarespace after hearing good things about the service, and honestly, it was time for an upgrade.

The move happened this morning and was literally just a DNS change, which should have propagated across the internet by now. If you have any suggestions for the site, I am totally open to them; please contact me via the link to the left.

I hope the blog is as fun and interesting to read as it is to post on!

Cheers!

Modifying Default User Profile with Registry Edits

I'm currently working on a VDI (Virtual Desktop Infrastructure) deployment at work; we are using VMware View 5.0. The deployment is going well, and overall I'm impressed with the linked-clone technology and fast provisioning of desktops.

It's something of a wonder that with a click of a button, I can have a desktop ready for the end user.

But like any solution, there is always a snag that we as IT professionals need to overcome to fully deploy it. Mine came from a certain CRM program (TigerPaw) whenever a new user profile is created on the virtual machine.

By default, when a new user logs onto a system, a small setup is required to connect the program to the SQL backend. Giving this task to the end user is totally unacceptable, and kind of kills the push-and-go mentality VMware View promises.

Recent Databases not populated with any SQL databases when logging into a computer for the first time.


I noticed that the settings for the application are not saved in the user's profile folder, but in the user's registry hive.

So my goal is to have the system automatically configure itself through registry edits without IT Support intervention.

WARNING

Before I get into the details, I just want to mention that this configuration has no guarantee and is given AS IS. You SHOULD BACK UP FIRST before modifying anything on your systems, and I can't be held responsible if something happens to your system during the configuration. Try it in a lab first before going into production!

Background on HKEY_CURRENT_USER

When a new user logs onto a system, Windows copies everything from the "Default" user (C:\Users\Default) to the new profile. In this default user profile sits the registry hive NTUSER.DAT.

We can actually load this hive, make as many modifications as we want, unload it, and then log on a new user.

How to do it

Open the Registry Editor by going to Start and typing REGEDIT:

Opening Regedit


This opens the Registry Editor. Select HKEY_LOCAL_MACHINE, then go to File -> Load Hive...

Loading a Hive


Now go to C:\Users\Default and select NTUSER.DAT (if you don't see NTUSER.DAT, enable viewing hidden and system-protected files in Folder Options).


In the key name, type TEMP


The hive will load, and as you can see, it's almost the same structure as the current user's HKEY_CURRENT_USER:


Now, for my situation, I need to add the keys "Tigerpaw Software" and "TigerpawV11" under "Software", with the values that automatically point the user to the proper SQL server:


IMPORTANT: Make sure you unload the TEMP hive afterwards; otherwise the User Profile Service may hate it and refuse to load new user profiles.
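
The whole load/edit/unload cycle can also be scripted with reg.exe. A sketch, where the value name SQLServerName and data SQL01 are made-up placeholders (your TigerPaw values will differ):

REM load the default user's hive under a temporary name
reg load HKLM\TEMP C:\Users\Default\NTUSER.DAT

REM hypothetical value; use whatever your application expects
reg add "HKLM\TEMP\Software\Tigerpaw Software\TigerpawV11" /v SQLServerName /t REG_SZ /d SQL01 /f

REM always unload, or new profiles won't load
reg unload HKLM\TEMP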


Now when a new user logs in, they pick up these settings, and there is less overhead in user setup. The dream of one-click end-user deployment is closer than ever.

User Profile Screen before user logs in


Logging in as the new user


The TigerPaw Registry edits have been pushed to the new user profile.


Now I have a pre-populated SQL configuration!


Linux + Active Directory.. SSO bliss

As an IT consultant, sometimes you run into situations where you need to compromise. Do I really need to buy another Windows license just to have a file server? I run Linux; do I have to use Windows in order to leverage Active Directory? The answer to these questions is no. We don't need Windows to have a file server, software repository, or any other service we can think of linked to Active Directory. Using Kerberos and LDAP, we can have a single sign-on environment with Active Directory through Linux servers.

Warning

Before I get into the details, I just want to mention that this configuration has no guarantee and is given AS IS. You SHOULD BACK UP FIRST before modifying anything on your systems, and I can't be held responsible if something happens to your system during the configuration.

My setup

This was done using CentOS 5.5 against a Windows Server 2008 R2 Active Directory schema.

Making sure AD is ready

Before messing with Linux, there is one prerequisite to install in AD, and it requires a reboot: Identity Management for UNIX. This extends the Active Directory attributes with things like the UID, GID, shell location, and home folder. In Windows Server 2008 R2, in Server Manager, simply right-click the role "Active Directory Domain Services" and select "Add Role Services".


Select Identity Management for UNIX and all its subcomponents.


Once the install is done, simply reboot the server.


Once installation is finished, you should have a UNIX tab when going into a user or group's properties:


There is one more thing to configure before we can test single sign-on (SSO) with Linux: a user with UNIX properties in Active Directory. The NIS domain has already configured itself to use the NETBIOS name of your domain. In my example, I gave administrator a UID of 10000, set the login shell to /bin/bash, and specified a basic Linux home folder along with a group.


(Note: it is good to use a User ID of 10,000 or higher; this avoids conflicts with existing local user accounts in Linux.)

Also, before we head to Linux, it's good to create a user that Linux can use to browse AD (mine is called unixjoin; you'll see it in ldap.conf below).

HERE COMES LINUX

So with AD ready to go, let's log into our Linux machine. Make sure the DNS configuration on the Linux server points at AD; joining an AD domain without DNS doesn't really work. Modify /etc/resolv.conf to have this:

nameserver x.x.x.x

nameserver x.x.x.x

There are 4 configuration files that need to be modified: krb5.conf, nsswitch.conf, ldap.conf, and system-auth. The best thing to do right now is set up a second connection to the Linux box (either by SSH, or by pressing CTRL + ALT + F2).

The reason for the second connection is that we are tinkering with how Linux accepts authentication, and a misconfiguration in certain files (namely system-auth) can prevent even root from logging on.

Also, let's back up the configurations before we modify them. I like to create a /backupconfig folder at the root and copy all the configs there:

mkdir /backupconfig

cp /etc/krb5.conf /backupconfig

cp /etc/nsswitch.conf /backupconfig

cp /etc/ldap.conf /backupconfig

cp /etc/pam.d/system-auth /backupconfig

Got that? Great!

First, let's modify krb5.conf. In the following configuration I'm using the domain test.local, which has a domain controller called DC.test.local. Open the file with vi or, my personal favorite, nano:

nano /etc/krb5.conf

[logging]
default = FILE:/var/log/krb5libs.log
kdc = FILE:/var/log/krb5kdc.log
admin_server = FILE:/var/log/kadmind.log

[libdefaults]
default_realm = TEST.LOCAL
dns_lookup_realm = true
dns_lookup_kdc = true

[realms]
TEST.LOCAL = {
kdc = dc.test.local:88
admin_server = dc.test.local:749
default_domain = test.local
}

[domain_realm]
.test.local = TEST.LOCAL
test.local = TEST.LOCAL

[kdc]
profile = /var/kerberos/krb5kdc/kdc.conf

[appdefaults]
pam = {
debug = false
ticket_lifetime = 36000
renew_lifetime = 36000
forwardable = true
krb4_convert = false
}

It is very VERY important that we respect capitalization here, otherwise things just don't work.

As you can tell, this is the Kerberos configuration. We are telling Linux that TEST.LOCAL is our default realm, and that to talk to TEST.LOCAL it should use dc.test.local. We also allow DNS lookups to find the realm and the KDC.

Also, the [domain_realm] section says: if anyone uses test.local, map it to the realm TEST.LOCAL instead.

In [appdefaults], we tell PAM (the authentication module in Linux) not to debug, not to convert tickets to Kerberos version 4, that tickets are forwardable, and that a Kerberos ticket is good for 36000 seconds (10 hours).

Once that is done, let's work our way to /etc/ldap.conf. What I would do is erase everything in ldap.conf and rewrite a nice fresh clean config like so:

nano /etc/ldap.conf

base dc=test,dc=local
uri ldap://dc.test.local/
binddn unixjoin@test.local
bindpw password
scope sub
ssl no
nss_base_passwd dc=test,dc=local?sub
nss_base_shadow dc=test,dc=local?sub
nss_base_group dc=test,dc=local?sub?&(objectCategory=group)(gidnumber=*)
nss_map_objectclass posixAccount user
nss_map_objectclass shadowAccount user
nss_map_objectclass posixGroup group
nss_map_attribute gecos cn
nss_map_attribute homeDirectory unixHomeDirectory
nss_map_attribute uniqueMember member
tls_cacertdir /etc/openldap/cacerts
pam_password md5

What ldap.conf does is point Linux's authentication databases, like "passwd", "shadow", and "group", at a base within Active Directory using LDAP queries. Then it maps certain attributes from Linux to Active Directory; for example, the attribute "homeDirectory" in Linux is mapped to "unixHomeDirectory" in Active Directory.

OK, halfway done. Now we need to tell Linux the order in which to try authentication methods, much like host.conf specifies the order for hostname resolution:

nano /etc/nsswitch.conf

passwd: files ldap
shadow: files ldap
group: files ldap

#hosts: db files nisplus nis dns
hosts: files dns

# Example - obey only what nisplus tells us...
#services: nisplus [NOTFOUND=return] files
#networks: nisplus [NOTFOUND=return] files
#protocols: nisplus [NOTFOUND=return] files
#rpc: nisplus [NOTFOUND=return] files
#ethers: nisplus [NOTFOUND=return] files
#netmasks: nisplus [NOTFOUND=return] files

bootparams: nisplus [NOTFOUND=return] files

ethers: files
netmasks: files
networks: files
protocols: files
rpc: files
services: files

netgroup: files ldap

publickey: nisplus

automount: files ldap

aliases:    files nisplus

As you can see, this tells Linux: try local files first for each database, then try LDAP.

The final portion of AD authentication in Linux is modifying system-auth. This file is part of the PAM module; we touched on it before, it's the authentication service of Linux, so if there is a misconfiguration in this part... don't say I didn't warn you.

nano /etc/pam.d/system-auth

#%PAM-1.0
# This file is auto-generated.
# User changes will be destroyed the next time authconfig is run.
auth required pam_env.so
auth sufficient pam_unix.so nullok try_first_pass
auth requisite pam_succeed_if.so uid >= 500 quiet
auth sufficient pam_krb5.so use_first_pass
auth sufficient pam_ldap.so use_first_pass
auth required pam_deny.so

account required pam_unix.so broken_shadow
account sufficient pam_succeed_if.so uid < 500 quiet
account [default=bad success=ok user_unknown=ignore] pam_ldap.so
account [default=bad success=ok user_unknown=ignore] pam_krb5.so
account required pam_permit.so

password requisite pam_cracklib.so retry=3
password sufficient pam_unix.so md5 shadow nullok try_first_pass use_authtok
password sufficient pam_krb5.so use_authtok
password sufficient pam_ldap.so use_authtok
password required pam_deny.so

session optional pam_keyinit.so revoke
session required pam_limits.so
session [success=1 default=ignore] pam_succeed_if.so service in crond quiet use_uid
session required pam_unix.so
session optional pam_krb5.so
session optional pam_ldap.so
session required pam_mkhomedir.so

These modifications tell PAM which modules to use and whether each is optional or required for logging in and creating a session on this box.

Now the moment of truth: it's time to test single sign-on for Linux. The first command is kinit:

kinit administrator

Replace administrator with the user you added UNIX attributes to. If successful, you should be asked for a password and then returned to your prompt without any other messages. Run klist to list the Kerberos ticket, and kdestroy to destroy it. One last test is "getent passwd": this command queries your local passwd file and then asks AD for UNIX-enabled users; for now, we should only see the user you modified earlier.
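
The whole smoke test in one place (a sketch, using the administrator account we gave UNIX attributes to earlier):

kinit administrator   # asks for the AD password; returns silently on success
klist                 # shows the cached ticket for TEST.LOCAL
kdestroy              # throws the ticket away
getent passwd         # local users first, then UNIX-enabled AD users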

The fastest way to test authentication is by establishing an SSH session from the same system:

ssh administrator@localhost

Obviously, use the same user you configured in Active Directory at the beginning. You should be able to log on without any hiccups, and it will even create a home folder (if there wasn't one already).

Congrats! You got it! But wait, why isn't the Linux box listed in AD as a computer account? We can fix that by continuing a bit further and adding the computer account from within Linux.

To do this, make sure samba is installed.

yum install samba

Now, we need to modify some parameters in /etc/samba/smb.conf:

nano /etc/samba/smb.conf

workgroup = TEST
security = ads
passdb backend = tdbsam
realm = test.local
password server = dc.test.local

Now the last part is to run "net rpc join -U admin" (with security = ads, "net ads join -U admin" is the Kerberos-style equivalent).

Replace admin with an administrator account in your active directory environment.

Voila! Done! Now you have single sign-on with AD on your Linux box.

Tip of the Week - Feb 21st 2012 - whoami

This is a new weekly post with a tip for Windows, OS X, Linux, iOS, Android... anything, really, that I hope could help others in their daily computing lives. This week's tip is a Windows one. Ever wonder what permissions you have in your organization? What groups you're part of? What your SID is?

There is a nice command, introduced back in the Windows XP days, called whoami. First shipped as part of the Support Tools and now part of the standard install of Windows, this command can give you all the information about the currently logged-on user.

If we just issue whoami in CMD, we will get this:

Nothing spectacular, but let's look at the command's flags. There is an /ALL flag; let's see what happens when we run whoami /all (important SIDs are whited out). We get a whole bunch of information: my username, my SID, domain group memberships, and even my privileges.

So if you ever want a user to send you their information, you can give them a batch script containing:

whoami /all > userinfo.txt

This saves the information into a text file the user can send your way, so you can see all their group information and make changes as necessary.
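
If /all is too noisy, narrower flags carve the output up; a quick sketch:

REM just the username and SID
whoami /user
REM group memberships
whoami /groups
REM privileges held on this machine
whoami /priv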

Enabling TRIM Support on OSX

Getting an SSD is probably the single most amazing thing that ever happened to my personal computing experience. It took my mid-2010 MacBook Pro 13" from an OK experience to a first-class experience. The system boots fast, applications load faster, and I'm much more productive.

One thing that's important in an SSD's life is TRIM support. TRIM, in short, is the garbage collection needed to clear deleted data on your SSD.

To go into more detail: when the operating system deletes data, the SSD doesn't actually clear the bits for that data, it just removes them from the allocation table. The TRIM command is sent by the operating system so those bits eventually get cleared and made ready to be written again.

Without TRIM from the operating system, the SSD eventually slows down, because it has to find used-but-deleted bits and clear them at write time.

Wikipedia article on TRIM

Fortunately, most modern operating systems do support TRIM (Windows 7 and OS X 10.7). The problem with OS X is that it's not enabled by default unless the drive is an Apple-branded SSD.

I got an OCZ Vertex 2 120GB SSD, and when I checked for TRIM support after a re-install, TRIM was not enabled (you can see this by going into About This Mac > More Info > System Report > Serial-ATA). You can enable it, but it requires a reboot and some Terminal work.
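
You can also check from the Terminal instead of clicking through System Report; a one-liner sketch, which should print a TRIM Support line for each SATA device:

system_profiler SPSerialATADataType | grep -i "TRIM"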

There is a good article from this website.

They point to a document, but there was an issue with the way the quotes were encoded, so I'll post the commands here; for the sake of completeness you can find the original document here.

First backup the file in question:

sudo cp /System/Library/Extensions/IOAHCIFamily.kext/Contents/PlugIns/IOAHCIBlockStorage.kext/Contents/MacOS/IOAHCIBlockStorage /IOAHCIBlockStorage.original

Now time to use perl to modify the file:

sudo perl -pi -e 's|(\x52\x6F\x74\x61\x74\x69\x6F\x6E\x61\x6C\x00).{9}(\x00\x51)|$1\x00\x00\x00\x00\x00\x00\x00\x00\x00$2|sg' /System/Library/Extensions/IOAHCIFamily.kext/Contents/PlugIns/IOAHCIBlockStorage.kext/Contents/MacOS/IOAHCIBlockStorage

Clear the kext caches:

sudo kextcache -system-prelinked-kernel

sudo kextcache -system-caches

Now reboot your Mac.

Now you should see TRIM support as Yes.

Windows 7 Profiles Won't Load, Stuck with Backup Status

There are bugs you can work around, and then there are some that are just weird. I came across one involving a strange state for a local Windows profile. Usually you'd solve this by simply re-creating the profile, and that usually works, but there is a faster way to fix a profile stuck in a backup status:

The image above doesn't really show a backup status, but you get the picture. The profile in question loads the home directory C:\Users\TEMP. The user's desktop won't be the same, Outlook won't have the same profile, and the user's favorites will be gone. Let's not panic: the real profile folder is still there; the user just isn't mapped to the right home directory.

First thing to do is to reboot the workstation in question. If you still have the same issue, we will need to modify the registry.

NOTE: Modifying the registry is risky, and even if you follow the instructions word for word, I can't guarantee success or a corrupted windows or loss of data. Please proceed at your own risk.

This remedy is taken from this Microsoft KB article, but I'll go through it here for completeness and add my thoughts to each step.

http://support.microsoft.com/kb/947215

Go to Start and run REGEDIT

Go to:

  • HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList
In this key, you should see something similar to this:

The S-1-5-21 keys are the profile configurations in Windows. One thing to notice is that two of them are strikingly similar (S-1-5-21-1079119...) with one difference: the .bak at the end of the bottom key.

Let's take a look inside the key:

A healthy profile should look like this:

A cool thing to note is that ProfileImagePath points to the user's home directory. An unhealthy profile will show ProfileImagePath as C:\Users\TEMP, and RefCount will have a value higher than 0.

To solve the problem, log in to an administrator account other than the one that has the issue.

Next, rename the key that does NOT have the .bak so it ends in .ba.

Now rename the key that HAS the .bak and remove the .bak.

Finally, rename the key that now ends in .ba so it ends in .bak.

Once that is done, you will need to modify a few more things in the key without the .bak.

We need to change RefCount to 0.

We also need to set the State value to 0.
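
If you prefer the command line for those two value edits, reg.exe can do it (a sketch; <SID> stands for the full S-1-5-21... key name of the profile you just renamed):

reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList\<SID>" /v RefCount /t REG_DWORD /d 0 /f
reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList\<SID>" /v State /t REG_DWORD /d 0 /f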

Now it's time to reboot and try to log in.

This worked for me. What you're more or less doing is manually flipping the profile's state from backup back to local, similar to clearing a stuck restart-pending status on a Windows server.

VMware Fusion 4 Can't add USB devices to a VM

Another weird bug came across my desk this week, this time involving VMware Fusion and mapping USB devices to a VM. When you connect a USB device to the VM via the menu, VMware Fusion never actually attaches it, leaving you scratching your head: is Windows the problem? Is it the port? What is it?

Turns out that this problem can be caused by permissions! Go into Terminal on the Mac and type:

ls -ld /

This should print the permissions of the root directory:

drwxr-xr-x  33 root  wheel  1190 14 Dec 09:17 /

If it's anything else, we are going to need to fix the permissions.

First place to go is Disk Utility: select your disk, then click "Verify Disk Permissions" and "Repair Disk Permissions".

If that doesn't solve the problem, you can always fix it by hand with the following commands in the Terminal window:

sudo -s

chown root:wheel /

chmod 755 /

This will require a reboot of the Mac, and now you should have the right permissions to map the USB devices!

 
Original KB Article: http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2004687

IMCEAMAILTO errors in Exchange

Ever get this weird error?

mailto:IMCEAMAILTO-emailaddress%2B40domain%2B2Ecom@localdomain.com>

#550 5.4.4 ROUTING.NoConnectorForAddressType; unable to route for address type ##

I encountered this weird error today. What is that strange e-mail address, IMCEAMAILTO? This is what Outlook generates when a user clicks a "mailto:emailaddress@domain.com" hyperlink inside an e-mail to send a new message.

For some reason, Outlook doesn't actually parse the MAILTO: but instead keeps the MAILTO: as part of the e-mail address you are sending to.

The problem doesn't stop there, unfortunately. Because Outlook wants to remember every address you ever wrote to (the Suggested Contacts list), it actually saves the bad address for that specific contact, which presents itself as a contact called "emailaddress@domain.com" with the e-mail address "MAILTO:emailaddress@domain.com".

To solve this permanently for that specific address, we need to dig into the user's Suggested Contacts and change the saved e-mail address type back to SMTP.

HTTPS Proxies for OWA and ActiveSync..

A lot of modern firewalls (ISA, WatchGuard, etc.) can apply proxy actions to published services. What's the advantage? We can monitor the entire conversation between the client and our web server, just like with client outbound proxies. However, there can be mishaps. One good example is how a WebDAV server behaves behind an HTTP proxy: you may get mixed results. OWA (Outlook Web Access) serves WebDAV to IE clients, and sometimes you get errors like not being able to see your inbox even though your folders show up fine, or ActiveSync not working at all.

First, let's look at the OWA error. In my example, I'm using a WatchGuard XTM firewall with an HTTPS proxy to publish OWA. With the proxy's default values, we can log into OWA, but the inbox is stuck on a "loading…" message. To make the inbox come up, we need to tick a single checkbox:


The checkbox bypasses proxy actions to allow WebDAV through.

Next, let's look at ActiveSync. ActiveSync will just not work with WatchGuard's default HTTPS proxy. The best way to diagnose it is to try browsing to the ActiveSync web page:


With this, we need to allow the "OPTIONS" method in the HTTP protocol settings:


Time Machine Server on OSX Lion.. why won't you work?!?!

I recently got myself a Mac Mini (mid-2011) to act as a media center and as a server for my home environment. I will admit, things were not as smooth as I anticipated. Apart from not having control of DHCP and DNS from the default Server.app (not that I'm bitter), and having to download the server admin tools to control Open Directory, the Time Machine server function never "just worked" for me.

In Server.app, setup is plainly simple: choose your disk and turn it on:


So the setup is practically seamless. How does another Mac back up to the Time Machine server? The server uses Bonjour to broadcast the backup service; what's presented to your Mac is a share on the server called "Backups":


What SHOULD happen is backups over Wi-Fi, pretty cool! One problem: troubleshooting this thing is not user friendly AT ALL, as in my case:


What does "NAConnectToServerSync failed with error: 80" mean?

Of course, Lion is brand spanking new, so googling for help was useless (especially for Lion Server). Turns out, the password I was using was the culprit.

My password contained a special character, "$". This messes with the mount_afp command that is issued to start the backup. The solution? Create a backup user whose password has no special characters.

Now, with that considered, I find this HORRIBLE! How, in this day and age, can you not allow special characters in passwords and have things still work? It's beyond me. A lot of my server experience has been a big mess. In Windows, when I DCPROMO a server, it installs DNS; why are DNS and DHCP so buried in the settings here? I don't get it.

Hopefully Apple can apply the same quality control it puts into its consumer products. Hell, at a $50 server license for all your Macs, you can pretty much call it a consumer product.

Goodbye EXMERGE! Hello PowerShell

Remember the good old days, when exporting an e-mail account out of Exchange for archiving or just general backup purposes meant installing EXMERGE? ExMerge was, and still is, a blessing to admins everywhere: a powerful tool that gave you control over exporting and importing mailboxes in Exchange, packaging everything up in a nice .PST file so you could re-import it or open it with Outlook. Let's face facts, though: by today's standards it's not the most elegant or modern solution going. I was happy to see that Microsoft added this functionality to Exchange 2010 through PowerShell, with no Outlook required!

First off, we need to give your AD account the Mailbox Import Export role. Let's fire up the Exchange Management Shell and type:

New-ManagementRoleAssignment -Role "Mailbox Import Export" -User domain\AdministratorAccount

Before we start exporting and importing, there is one small snag: we need to use network shares for the output and input of PST files. It can, of course, be a share on the Exchange server itself. (Make sure you have full read and write permissions on the share!)

So let's start with exporting.

When you import or export, you issue a request; think of it like moving a mailbox in the Exchange Management Console. The request holds the status of the job, even after the job fails or completes.

To start an export request:

New-MailboxExportRequest -Mailbox user -FilePath "\\server\share\user.pst"

This issues the export request... now what? We can list export requests by issuing:

get-mailboxexportrequest

For more detailed output:

get-mailboxexportrequeststatistics

This is good, but now I want the full details of the request I just made:

get-mailboxexportrequeststatistics -identity user\mailboxexport | fl

If we want to create a mailbox import request, it's the same set of commands; just change "export" to "import":

New-MailboximportRequest -Mailbox user -FilePath "\\server\share\user.pst"

get-mailboximportrequest

get-mailboximportrequeststatistics

get-mailboximportrequeststatistics -identity user\mailboximport | fl
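
One housekeeping note: finished requests stick around until you remove them. A sketch for clearing out the completed ones, run from the same Exchange Management Shell:

Get-MailboxExportRequest -Status Completed | Remove-MailboxExportRequest
Get-MailboxImportRequest -Status Completed | Remove-MailboxImportRequest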

Save public IP addresses: use HAProxy for multiple hosted web apps

Sometimes you need to publish a bunch of web servers but don't have enough public IP addresses to publish them with. Usually virtual hosts come to the rescue, but what if you have multiple instances of Apache, or just multiple web servers?

There is a way to redirect these requests using only one public IP, and best of all, it's completely free! (In money, not time!)

(Diagram: one public IP on HAProxy routing requests to multiple internal web servers.)


What you will need: a Linux distro (I like CentOS) and an available machine, or the ability to create a virtual machine.

After installing your base OS, you're going to need to wget the source files to install.

First create a folder:

mkdir /installer
cd /installer

Now it's time to get the latest source package of HAProxy:

wget http://haproxy.1wt.eu/download/1.4/src/haproxy-1.4.15.tar.gz

Let's extract it:

tar -xf haproxy-1.4.15.tar.gz

Now build and install it. The Makefile wants a TARGET; linux26 matches the 2.6 kernels CentOS ships (check the included README if your kernel differs):

cd haproxy-1.4.15
make TARGET=linux26
make install

Let's copy haproxy to the sbin folder:

cp haproxy /usr/sbin/haproxy

Now let's go to the etc folder:

cd /etc

and create a new file called "haproxy.cfg" with the following contents:

nano haproxy.cfg

global
 maxconn 4096 # Total max connections; this is dependent on ulimit
 daemon
 nbproc 4 # Number of processing cores. A dual dual-core Opteron is 4 cores, for example.

defaults
 mode http
 clitimeout 60000
 srvtimeout 30000
 contimeout 4000
 option httpclose # Disable keepalive

frontend http-in
 bind *:80
 acl is_server1 hdr_end(host) -i server1.com
 acl is_server2 hdr_end(host) -i server2.com

use_backend server1 if is_server1
 use_backend server2 if is_server2

backend server1
   balance roundrobin
   cookie SERVERID insert nocache indirect
   option httpchk HEAD /check.txt HTTP/1.0
   option httpclose
   option forwardfor
   server Local 192.168.1.x:80 cookie Local

backend server2
   balance roundrobin
   cookie SERVERID insert nocache indirect
   option httpchk HEAD /check.txt HTTP/1.0
   option httpclose
   option forwardfor
   server Local 192.168.1.x:8080 cookie Local

More about this config in a moment.

Let's finish the install by getting the init script:

wget http://layer1.rack911.com/haproxy/haproxy.init -O /etc/init.d/haproxy

Now finish the startup setup:

chmod +x /etc/init.d/haproxy
chkconfig --add haproxy
chkconfig haproxy on

Now you can start and stop the service by running:

service haproxy stop
service haproxy start

So what about the config file? Let's focus on a few sections of importance:

The first section is the ACL section:

frontend http-in
bind *:80
 acl is_server1 hdr_end(host) -i server1.com
 acl is_server2 hdr_end(host) -i server2.com

use_backend server1 if is_server1
 use_backend server2 if is_server2

This is saying: "I'm creating a rule called 'is_server1', and in this rule I want you to check the Host header (hdr_end(host)) and see if it ends with server1.com." The same logic is applied to server2.com.

The second part states: "send the request to backend 'server1' if the rule 'is_server1' matches."

So far, so good. Now let's take a look at the "backend" section for "server1":

backend server1
   balance roundrobin
   cookie SERVERID insert nocache indirect
   option httpchk HEAD /check.txt HTTP/1.0
   option httpclose
   option forwardfor
   server Local 192.168.1.x:80 cookie Local

In brief, this states: "this is the configuration for server1; if a request is routed to this section, send it to the server at 192.168.1.x:80."

So to add or remove servers in your configuration, all you need to do is add the new configuration to these two sections, and you're all set.
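
For example, publishing a hypothetical third site (server3.com on internal port 8081, both made up) would mean one new ACL and routing line in the frontend, plus a matching backend block:

# added inside frontend http-in:
 acl is_server3 hdr_end(host) -i server3.com
 use_backend server3 if is_server3

# and a new backend, mirroring the others:
backend server3
   balance roundrobin
   cookie SERVERID insert nocache indirect
   option httpchk HEAD /check.txt HTTP/1.0
   option httpclose
   option forwardfor
   server Local 192.168.1.x:8081 cookie Local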

Mounting VMFS with Ubuntu

The vSphere platform is, in my opinion, the most complete package for the virtualized datacenter. One of my gripes about it is manipulating data in the VMFS file system: it's mostly controlled via the vSphere client or by going deep into the CLI on the host itself. That isn't a problem when you live only in a vSphere world, but sometimes, for debugging or troubleshooting, an extra tool to get data off of VMFS wouldn't hurt.

That's where this tip comes in: there are tools out there that you can use on different operating systems to mount a VMFS datastore. For this article we are going to use Ubuntu 11.04 Desktop to mount a VMFS hard drive.

First off, we are going to need the tools. They are simply called vmfs-tools, and luckily we can get them with a simple apt-get:

sudo apt-get install vmfs-tools

This package includes 3 commands:

vmfs-fuse
debugvmfs
fsck.vmfs

For this tip we are going to use vmfs-fuse, which is the utility to mount VMFS.

A quick look at the man page for vmfs-fuse gives us the usage:

vmfs-fuse VOLUME MOUNTPOINT

Simple enough. The only problem is figuring out which device to mount; if we ls into /dev, there are a lot of candidates for the disk in question:


That's right, ESXi created 8 partitions. A simple fdisk readout tells us which one is the VMFS partition:


sdb3 it is!

sudo vmfs-fuse /dev/sdb3 /mnt/vmfs

If there are no errors, it has mounted. Exploring the filesystem as a regular user is not possible, though, because of the funky permissions it gets:


This costs only a little convenience, because we can still browse by using full paths with sudo:

sudo ls /mnt/vmfs -lah


The whole point of this tip is to copy VMs out of the VMFS file system for archiving or troubleshooting purposes. So let's copy the VM "test vm" to a folder in Ubuntu.

Note that when you copy, thin disks are converted to thick: a thin-provisioned 8GB VM will take the full 8GB of space on your local file system.
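
The copy itself is just cp with sudo; a sketch (the destination folder is whatever you like, and the quotes matter because the VM folder has a space in its name):

mkdir -p ~/vm-archive
sudo cp -r "/mnt/vmfs/test vm" ~/vm-archive/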


Reference: http://www.planetvm.net/blog/?p=1592

Mounting CIFS shares through Linux with write permissions

Mounting a CIFS share in Linux is fairly simple:

sudo mount -t cifs //server/share /mount/point -o username="user"

After this you can mount and read the share. Problem is, you can't write to it. Let's look at the permissions (mounted at /mnt/test):

Permission issue with mount.cifs

We get 755 permissions, which are good for just reading and executing; a regular desktop user can't write to the share. What we need to do is modify the command:

sudo mount -t cifs //server/share /mount/point -o username="user",uid="uid#"

Where uid# is the local user's numeric user ID. The folder permissions stay 755, but ownership changes to the user specified by the UID.

If you don't know your UID, enter this:

id -u "username"
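
End to end it looks something like this (a sketch; alice, the server name, and UID 1000 are all made-up examples):

# find alice's numeric UID, then mount the share with her as owner
id -u alice        # prints 1000, say
sudo mount -t cifs //server/share /mnt/test -o username=alice,uid=1000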

This helped me out recently with a Fedora desktop for a user (who runs that these days?), and I hope it helps you as well.

- Timothy Matthews