Looking at my blog posts, you will see I have experience in many different areas. To me, this is one of the benefits of working for a small to medium sized business: lots of different systems designed and deployed, resulting in lots of different solutions. Below are some of the projects I am working on or have worked on in the past. I realize that this page is long and dense – some of these descriptions were taken from my project plans, but I thought I would document what we did, and why.
Implemented ADFS/Azure Active Directory Connect solution for SSO
- Overview: We wanted to move to Office 365, but first we needed an SSO solution.
- Goal: We wanted to set up an ADFS implementation in Azure Infrastructure as a Service (IaaS).
- Solution: If you look below, you can see that we successfully extended our infrastructure to Azure via an IPsec tunnel. This tunnel allowed us to provision 2 Domain Controllers (in 2 different storage accounts), 2 ADFS servers in an AvailabilitySet behind an Azure InternalLoadBalancer (in 2 different storage accounts), and 2 ADFS proxies in an AvailabilitySet (again, 2 different storage accounts). In our Azure instance we created a DMZ subnet to house the ADFS proxies and wrote both inbound and outbound NetworkSecurityGroup rules (ACLs to me) for it. We then added a load-balanced endpoint for SSL, and our ADFS infrastructure was set up. Finally we added the Azure Active Directory Connect (DirSync) client, and our users and groups were replicated to Azure Active Directory. We have a highly available ADFS implementation, and we are ready for the cloud.
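For the curious, the load-balanced endpoint piece looks roughly like this in the classic (Service Management) Azure PowerShell module. This is a minimal sketch – the service, subnet, IP, and VM names are placeholders, not our production values:

```powershell
# Classic (Service Management) Azure PowerShell; all names are placeholders
$svc = "contoso-adfs"   # cloud service that holds the ADFS AvailabilitySet

# Create the internal load balancer on the internal subnet
Add-AzureInternalLoadBalancer -ServiceName $svc `
    -InternalLoadBalancerName "ADFS-ILB" `
    -SubnetName "Internal" -StaticVNetIPAddress "10.0.1.10"

# Add a load-balanced SSL endpoint to each ADFS server
foreach ($vmName in "ADFS01", "ADFS02") {
    Get-AzureVM -ServiceName $svc -Name $vmName |
        Add-AzureEndpoint -Name "HTTPS" -Protocol tcp `
            -LocalPort 443 -PublicPort 443 -LBSetName "ADFS-SSL" `
            -ProbeProtocol tcp -ProbePort 443 `
            -InternalLoadBalancerName "ADFS-ILB" |
        Update-AzureVM
}
```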
Extended Local Datacenter to Azure
- Overview: We wanted to migrate some systems to the cloud, but we had to be able to connect to them first.
- Goal: Connect our local networks to Azure.
- Solution: This was something I had explored when Azure first allowed site-to-site connections (but before you could connect multiple sites at the same time). We were adding a new office in Europe, and I demonstrated that I could create an Azure site in the North Europe zone and connect it to our office. With this connection, we could provision IaaS resources, and users in the new European office could connect to them. At the time, it was not practical, as traffic had to flow through the European office to get to the Azure site (again, you couldn’t have multiple connections at first), but it was a valuable POC. A couple of years later, we were ready to move some resources to the cloud. I have now used Cisco ASA and Microsoft RRAS servers to connect multiple sites to an Azure network. We are ready for the cloud.
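If you are connecting with RRAS, the whole thing boils down to a few cmdlets from the RemoteAccess module (Windows Server 2012 R2). A minimal sketch – the gateway IP, shared key, and address space below are placeholders:

```powershell
# Windows Server 2012 R2 RRAS; addresses and key are placeholders
Install-WindowsFeature RemoteAccess -IncludeManagementTools
Install-RemoteAccess -VpnType VpnS2S

# One site-to-site interface per remote network (here, the Azure gateway)
$azure = @{
    Name                 = "AzureVNet"
    Destination          = "203.0.113.10"        # Azure gateway public IP
    Protocol             = "IKEv2"
    AuthenticationMethod = "PSKOnly"
    SharedSecret         = "key-from-the-Azure-portal"
    IPv4Subnet           = @("10.10.0.0/16:100") # Azure address space:metric
}
Add-VpnS2SInterface @azure
Connect-VpnS2SInterface -Name "AzureVNet"
```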
Implemented Two-Factor Authentication
- Overview: Management wanted two-factor authentication (2FA) (we were ready for them!)
- Goal: Require two-factor authentication on all external logins.
- Solution: I researched and implemented Symantec VIP 2FA in a proof of concept (POC) environment. I was surprised how easy it was! I set up a trial account, linked it to my home lab, and demoed it to our team. Within a week, the owners of the company asked (not knowing about my POC) for 2FA. We had a solution and rolled it out within a week. Hooking Symantec VIP into NetScaler, OWA, and Cisco VPN was even easier. A quick win.
Implemented a new Citrix NetScaler/XenDesktop infrastructure
- Overview: We wanted a better application/user environment delivery system, one that can grow with our business – a solution that delivers secure user environments and future applications internally, externally, and to any device.
- Goal: Move from our old, clunky VPN/Remote Desktop solution to a more robust one.
- Solution: I researched and implemented a complete Citrix XenDesktop 7.6 environment, fully redundant with a disaster recovery site. We provide both Remote PC Access, so users can continue to access their workstations as they are accustomed, and a Remote Desktop Session Host (RDSH) (Terminal Server) environment for quick logins. This XenDesktop foundation paves the path toward a Virtual Desktop Infrastructure (VDI) and more flexible user environment delivery. We are currently testing Microsoft’s UE-V product to provide access to a user’s environment – items like Firefox/Chrome bookmarks – wherever they log in. Microsoft UE-V seems like it may meet our needs to roam without profiles.
- Goal: Make it easy to add a new office anywhere in the world
- Solution: With our new Citrix infrastructure, all we really need in a new office is an IPsec endpoint, a Windows Server acting as a Domain Controller (DC), some thin clients, and a printer! In the past we focused on bringing the data to the users. That is not practical as you add more users and offices. We now have a platform to bring the users to the data (we are heavily reliant on file shares, for example).
- Goal: Bring up a new Cisco firewall to secure access to our new Citrix environment
- Solution: I configured and implemented a new Cisco ASA firewall with a DMZ to provide a secure landing spot for our new Citrix infrastructure (see NetScaler below).
- Goal: Provide mobile access to User Environments
- Solution: We initially implemented a Citrix NetScaler VPX in our DMZ (later replaced with an MPX) and configured it to deliver desktops/user environments to external users. Users can quickly connect from anywhere with a modern HTML5 browser, or can load the Receiver app for a more robust experience (e.g., printing). The NetScaler is stretched to our disaster recovery site (GSLB) to provide access in the event of a disruption.
Upgraded Linux Hosting Environment to Load-Balanced, HA Systems
- Overview: We wanted to upgrade our existing Linux/LAMP cloud hosting platform. Our existing platform had served us well: our collection of scripts to manage WordPress across development and production environments had matured enough to allow simple and consistent management of resources. We wanted a more robust solution.
- Solution: We decided to implement a load-balanced front end pointing to a master MySQL backend. The scripts that we relied on needed to be re-architected, as they assumed a local SQL server. This was pretty easy to do since I had moved the database interactions into their own script. Within a week, we started migrating sites off the old platform to the new one. Now we have the ability to add more servers quickly if high traffic volume is expected, and we can take nodes offline for service if necessary. In addition, I improved the existing scripts to allow for easy creation of a staging server. These scripts copy production files and databases locally, creating an identical snapshot of the production WordPress environment. Developers can test changes on an identical environment with a quick hosts file change (Gas Mask).
Brought up two new office locations, Managed Hurricane Sandy DR failover, and Moved DR location across the country
- Overview: We added two new offices, Sandy happened, and we decided to move our DR site. While these aren’t the sexy projects, they illustrate the completeness and functionality of the solutions that we have implemented throughout our environment.
- Two new offices: Our company was expanding, and we needed to bring up the new sites. Armed with just 2 USB thumb drives (one for VMware/VMs, the other for workstation SCCM OSDs), I brought up an office within two days: racked a server, installed VMware, used a Windows Server SCCM OSD to build a DC and an app server, then used a workstation SCCM OSD to install all the workstations. The processes that we put into place years ago with SCCM 2007 worked so consistently and smoothly that I was able to bring up the office quickly. The experience I had gained using Cisco ASA IPsec tunnels to connect sites to Azure allowed me to quickly connect the new office to our existing infrastructure.
- Sandy: We lost our main site when Sandy blew through. We did not know the condition of the location, as no one could get across a bridge. All we knew was that the main site was down. I was the only one who had internet connectivity, and cell service was sketchy at best. Since I had implemented the email flow, and knew that the DR solution for mail should work, I failed the environment over, and we had mail flowing to our users within an hour. I add this to my project list to show how we created an easy-to-manage solution that allows one person to conduct the whole process.
- Moved DR site: Sandy taught us that our disaster recovery site was in a flood plain and that it was too close to our office! I organized the de-racking and shipping of the equipment to a different remote site and, again armed with two USB sticks, brought up the new site, adding additional capacity. Cisco IPsec tunnels and new VMware servers/VMs were put into place by only one person.
Evaluated MDM providers, and rolled out AirWatch to the enterprise
- Overview: Evaluated, compared, and implemented an MDM solution.
- Goal: Find the right product for us.
- Solution: We evaluated a couple of different vendors. We knew from using Apple’s Configurator that the majority of what you can manage is limited to what the phones allow you to do; most MDM platforms did the same things. Our decision came down to the product that provided the best user experience. We ended up selecting AirWatch. This solution has worked well for us. Rollout was smooth – all we did was provide instructions, and if people had problems, they could contact us. Our users were pretty self-sufficient, and the project completed quickly.
Upgraded SCCM to 2012 with OS Build Automation (SCCM)
- Overview: We wanted to move to SCCM 2012 in order to deploy Windows 10.
- Goal: Automated Gold Master Build Process
- Solution: As Windows 7 got older, and since there weren’t going to be any new service packs, our Operating System Deployment (OSD) process kept taking longer and longer – due to all the patches released since the last service pack. I wanted to automate the gold master creation process so that we could always have an up-to-date image to deploy. I wrote a PowerShell script that boots a VM to PXE and runs a mandatory task sequence. The task sequence installs the RTM version of Windows 10, Office 2010 (soon to be replaced with 2016), and all the “secondary apps” (Flash, Reader, PowerShell SOE, etc.). The sequence continues by patching and rebooting the image, and finishes by capturing the image into a known location (and updating the distribution points). This known location is referenced by the deployment OSD task sequence that is used to image the workstations.
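The kick-off part of that script is small; most of the work happens inside the task sequence. Here is a trimmed sketch using VMware PowerCLI – the vCenter, VM, and snapshot names are made up:

```powershell
# VMware PowerCLI; server/VM/snapshot names are placeholders
Add-PSSnapin VMware.VimAutomation.Core
Connect-VIServer -Server "vcenter.example.com"

$vm = Get-VM -Name "GoldMaster-Win10"

# Revert to a clean bare-metal snapshot so every build starts fresh
Set-VM -VM $vm -Snapshot (Get-Snapshot -VM $vm -Name "BareMetal") -Confirm:$false

# Power on; the VM PXE-boots into the mandatory build-and-capture task sequence
Start-VM -VM $vm

# The task sequence powers the VM off when the capture is complete
while ((Get-VM -Name $vm.Name).PowerState -eq "PoweredOn") {
    Start-Sleep -Seconds 300
}
Write-Host "Capture done - new WIM is in the known location; update the DPs."
```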
- Goal: Improve the existing SCCM PowerShell scripts
- Solution: The scripts we used in SCCM 2007 (here) worked great, but with SCCM 2012 and its improved PowerShell interface, I wanted to re-architect the existing scripts, expanding them to leverage SCCM’s new capabilities. These new scripts can now create new packages and distribute them out to workstations in one step (during a maintenance window, of course). The scripts continue to evolve, and their new functionality now makes it possible for an SCCM newbie to push out updated packages.
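As a taste, creating a package and pushing it to the distribution points is only a few lines with the ConfigurationManager module. The site code, paths, and names below are examples, not our real ones:

```powershell
# SCCM 2012 ConfigurationManager module; site code/paths/names are examples
Import-Module "$env:SMS_ADMIN_UI_PATH\..\ConfigurationManager.psd1"
Set-Location "ABC:"   # the site-code PSDrive the module creates

# Create the package and its silent-install program
$pkg = New-CMPackage -Name "7-Zip 9.20" -Path "\\server\packages\7zip\9.20"
New-CMProgram -PackageId $pkg.PackageID -StandardProgramName "Install" `
    -CommandLine "msiexec /i 7z920-x64.msi /qn" -RunType Hidden

# Push the content out to every DP in the group
Start-CMContentDistribution -PackageId $pkg.PackageID `
    -DistributionPointGroupName "All DPs"
```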
Created a PowerShell Standard Operating Environment (PowerShell)
- Goal: The goal was to have one location for all custom signed PowerShell scripts.
- Solution: I like to keep things organized, and I like to write “wrapper” automation scripts (the SysAdmin model: if I have to do it more than once, I should script it!). The other members of the team have come to rely on my scripts, so I had to create a way to distribute updates to them. As I update and add features to the scripts, we have a common/standard operating environment that keeps things organized, from the update process, to the signing, to the distribution of the scripts. This continues to grow as we move into new projects, like Citrix and Azure.
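The heart of the SOE is a small sign-and-publish step. A minimal sketch – the folder and share paths are hypothetical:

```powershell
# Sign every script in the dev folder and publish it to the team share
# (paths and share name are hypothetical)
$cert = Get-ChildItem Cert:\CurrentUser\My -CodeSigningCert |
        Select-Object -First 1

Get-ChildItem "C:\Scripts\dev" -Filter *.ps1 | ForEach-Object {
    Set-AuthenticodeSignature -FilePath $_.FullName -Certificate $cert | Out-Null
    Copy-Item $_.FullName -Destination "\\server\PowerShellSOE" -Force
}
```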
Implemented Microsoft CRM 2011 (MSCRM)
- Overview: At first, there was one request: “Who have we worked with in the past on these types of projects?” (and by we, I mean the owners of the company). Then: “Who do we know in this industry?”, “We would like to send an email announcing a new office opening to everyone we have worked with in this area”, “We want to send an invitation to a party we are holding”. These types of requests kept coming in. We needed a CRM. Even if we didn’t use all the features of the CRM, we could leverage its relationship capabilities to connect people to companies and eventually projects (and their project types).
- Goal: Research, design, and implement a CRM solution.
- Solution: We evaluated Microsoft CRM, Salesforce, and SugarCRM (I always like to look at open-source solutions in my evaluations!). Since we already had MS SQL infrastructure, and everything else in our shop is Microsoft, it made sense to stick with Microsoft. An easy-to-understand REST API sealed the deal.
- Goal: Keep costs down and limit the amount of training for users. I know these are common goals for a project, but we wanted to ease into the idea of a CRM.
- Solution: First, we decided that only a few people required a full MSCRM license. These users would scrub the data entered into the database. Everyone else would see the data presented in SharePoint, using the MSCRM Limited license that allows access through the API.
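That API is the OData endpoint CRM 2011 exposes. Here is a hedged example of pulling contacts from it with PowerShell – the org URL is made up, and Invoke-RestMethod needs PowerShell 3+:

```powershell
# Hypothetical on-premises CRM 2011 org URL
$base = "http://crm/ContosoOrg/XRMServices/2011/OrganizationData.svc"

# First 10 contacts, asking for JSON instead of the default ATOM feed
$resp = Invoke-RestMethod -UseDefaultCredentials `
    -Headers @{ Accept = "application/json" } `
    -Uri "$base/ContactSet?`$select=FullName,EMailAddress1&`$top=10"

$resp.d.results | Select-Object FullName, EMailAddress1
```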
- Goal: Integrate CRM into our SharePoint 2010 intranet and its forms.
- Solution A: One of the issues that we came across when using CRM data in SharePoint was making cross-domain requests. JavaScript/jQuery does not allow cross-domain requests, so we needed a workaround. I found code that described a proxy service that ran in ASP.NET. I modified it to meet our needs and uploaded the solution. Now when I want to use JavaScript to pull data from CRM (or any other REST endpoint), I pass the request to the proxy, and the server-side code makes the request and returns the result to the client-side script. This has worked extremely well. I believe this is built into SharePoint 2013 now.
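Our real proxy was an ASP.NET handler, but the pattern is simple enough to sketch: accept a request from the browser, make the outbound call server-side (where the same-origin policy doesn’t apply), and hand the body back. A conceptual PowerShell version, with a made-up port and query parameter:

```powershell
# Conceptual sketch only - the production proxy was ASP.NET.
# The browser calls http://localhost:8080/proxy/?url=<REST endpoint>;
# the server makes the request and returns the raw body.
$listener = New-Object System.Net.HttpListener
$listener.Prefixes.Add("http://localhost:8080/proxy/")
$listener.Start()

while ($listener.IsListening) {
    $ctx    = $listener.GetContext()
    $target = $ctx.Request.QueryString["url"]

    # Server-to-server call - no browser cross-domain restriction here
    $body  = (Invoke-WebRequest -Uri $target -UseDefaultCredentials -UseBasicParsing).Content
    $bytes = [Text.Encoding]::UTF8.GetBytes($body)

    $ctx.Response.ContentType = "application/json"
    $ctx.Response.OutputStream.Write($bytes, 0, $bytes.Length)
    $ctx.Response.Close()
}
```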
- Solution B: The proxy gave us the ability to mash up CRM data with SharePoint data. All we needed to do was put code in our CodeLibrary (see SP 2010 and the AdditionalPageHead delegate control) and we could provide autocomplete suggestions for SharePoint forms throughout the site. Or build a simple JavaScript CRM lookup app. Or build a form to add a new contact to the CRM database. Or build a form that collects users that are generating new business opportunities. This form would take the data from the user and create MSCRM Activities that are linked to the user, the contact they are meeting with, and a description of when and what happened. This allowed us to leverage the relationship capabilities of the CRM without having to train users on how to use MSCRM.
- Goal: Improve the Mailing/Marketing list management process
- Solution: We wanted to improve the way the firm handles mailing lists. The previous Excel sheets were not practical. An application was envisioned where a user could pick a company, then select employees in that company, rinse and repeat, and in the end, a set of labels would appear on their desk. In a couple of days I put together a single-page application (SPA) using the knockout.js framework to interact with MS CRM (through the proxy described in Solution A). It pulls companies and their employees and displays checkboxes for the user to select. Upon selection, a CRM Note is added to the employee’s CRM record. A dynamic marketing list is created based on the existence of these notes, and mailing labels can quickly be exported and printed. This SPA has been in use for 3 years and has not had to be changed. It is simple and just works.
Migrated from SharePoint 2007 to 2010 (SharePoint)
- Overview: This was more than a simple SharePoint upgrade. Yes, we wanted to upgrade the backbone – new OS, new SQL version – but we also wanted to redesign how things were done. Below are some of the unique goals of this migration.
- Goal: Centralize JavaScript and CSS code throughout our SharePoint environment. In the past, many of our SharePoint pages had CSS and JavaScript hacks in them. We wanted to centralize them so that all the code is in one place – a single place where anyone who knows JavaScript can edit the code without having to recompile a Visual Studio solution.
- Solution: We used the SharePoint AdditionalPageHead delegate control feature to add logic to every page that includes code out of a centralized script library. A technical blog post about how we did this can be found here.
- Goal: To easily recover or recreate our SharePoint environment in a DR or development scenario. We wanted to be able to quickly take a backup and restore it as production or as a development site. In the event of a disaster, we could bring the site back quickly. We could also use the same backup to create a fresh development site that was as current as the last backup.
- Solution: PowerShell to the rescue. I wrote a script that looks for the most recent backup, prompts the admin about the removal of the existing site if it exists, and then restores the DB, mounts it, and re-adds the solutions (WSPs). Now a fresh dev site can be created in 3-5 minutes.
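Condensed, the script does something like this (database, URL, and share names are examples):

```powershell
# SharePoint 2010 management shell; names/paths are examples
Add-PSSnapin Microsoft.SharePoint.PowerShell

# Find the newest backup on the backup share
$bak = Get-ChildItem "\\backup\SP\*.bak" |
       Sort-Object LastWriteTime -Descending | Select-Object -First 1

# (SQL restore of $bak into WSS_Content_Dev elided - plain T-SQL/sqlcmd)

# Drop the stale dev content DB if one is mounted, then mount the restore
Get-SPContentDatabase -WebApplication "http://dev" -ErrorAction SilentlyContinue |
    Dismount-SPContentDatabase -Confirm:$false
Mount-SPContentDatabase -Name "WSS_Content_Dev" `
    -DatabaseServer "SQLDEV" -WebApplication "http://dev"

# Re-add and deploy the solutions (WSPs)
Get-ChildItem "\\builds\wsp\*.wsp" | ForEach-Object {
    Add-SPSolution -LiteralPath $_.FullName
    Install-SPSolution -Identity $_.Name -WebApplication "http://dev" `
        -GACDeployment -Force
}
```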
Exchange 2003 to 2010 Migration (Exchange)
- Overview: The project was to migrate a cluster of 2 Exchange 2003 servers on Windows 2003 to a new Exchange 2010 infrastructure. Below is how we went about the migration:
- Phase 1: Architecture evaluation
- Server architecture: We wanted to keep two servers in our main site, plus an offsite replica of all the systems and the data. In Exchange 2003 we achieved this with an MS cluster and DoubleTake replication to our offsite location. In addition, we wanted to move from 2 physical boxes in our main location to two virtualized boxes. Our architecture ended up being 3 VMs on 3 servers, with all roles and a copy of each MailBox database on each VM.
- High availability architecture: Since you cannot have a DAG and Windows NLB on the same VM, we evaluated load balancers from Kemp. They were less expensive, well supported, and quick to install. We have been happy with our purchase.
- Offsite replication architecture: I have to say I have been impressed with 2010’s DAG implementation. It quickly became apparent that we would no longer need DoubleTake. All we would need to do is build a VM, install all 3 roles, and add the VM to the DAG.
- Load distribution architecture: One concern was that Exchange 2010 eliminated single-instance storage. We did not know how that would impact server overhead and storage. Since we were moving from an Exchange 2003 Active/Passive configuration to two servers, each with their own MailBox databases, we realized that we would be doubling our server power, as the mailboxes would be split across two servers (duh). The load balancers would also spread the remaining load across both servers.
- Phase 2: Service disruption testing (offline)
- Create offline testing environment: The first thing we did was create an “offline” replica of our existing environment. This turned out to be quite easy. We took a snapshot of a domain controller and brought it up offline (on an isolated network, not at all attached to the physical network), along with a workstation with Outlook installed. Exchange 2003 was re-installed in the offline network, and the data from one of the MailStores was restored. We then quickly installed Exchange 2010 and verified the restored mail was accessible.
- Disruption testing: The following tests were performed offline:
Scenario: Mail routing after adding the new 2010 servers :: Mail flowed just fine.
Scenario: Patching :: How do we handle patching? We came up with a workflow (sketched below): move the active MailboxDatabases off the mailbox server, then patch and reboot it.
Scenario: MB server failure :: We powered off a server to see what would happen. The DAG automatically failed over the MailboxDatabases.
Scenario: CAS/HT server failure :: The shared CAS/HT address remained available.
Scenario: Mailbox moves (2003 -> 2010) :: The user is prompted to reopen Outlook, and Outlook re-homes to the CAS array address. Tested and worked as expected.
Scenario: Mailbox moves (2010 MailboxDatabase -> 2010 MailboxDatabase) :: Seamless to the user.
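The patching and failover scenarios really come down to a couple of Exchange 2010 cmdlets, sketched here with hypothetical server and database names:

```powershell
# Hypothetical names; run before patching/rebooting a mailbox server
# Move every active database copy off MBX1...
Move-ActiveMailboxDatabase -Server MBX1

# ...patch and reboot MBX1, then verify copy health before the next server
Get-MailboxDatabaseCopyStatus -Server MBX1

# An unplanned failure works the same way, just automatically: the DAG
# activates the passive copy, e.g. DB01's copy on MBX2
Get-MailboxDatabaseCopyStatus -Identity "DB01\MBX2"
```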
- Phase 3: Requirements for implementation
- Must haves: New Servers, New LoadBalancers, ESX licenses, RIM upgrades to most recent versions, SAN certificates
- Nice to haves: Second EqualLogic to spread the mailboxes across more spindles, 2008 Domain Controllers.
- Phase 4: Implementation: Below are some of the things that we took into consideration when it was time to implement.
- Builds: Server build, DAG creation, Implement Datacenter Activation Coordination (DAC), CAS/HT Array creation, InterOP routing, Test Mailbox moves
- Repeat service disruption testing (Phase 2 above) in the online environment, including testing of the Kemp LoadBalancers
- Backup: Upgrade Backup software and purchase required agents. Implement SAN level snapshots via EqualLogics.
- Certificate implementation = SAN cert, NAT changes at the firewall (possible downtime?)
- Test mailflow to Blackberry/BES users
- Begin user mailbox moves = communications to users = new webmail address, new iOS & Android configuration instructions, and new Entourage configuration instructions
- Inbound mail re-pointing, outbound mail re-pointing
- Internal server re-pointing = point internal servers, MFDs, SharePoint, RedHat installs, UPSs, to the new SMTP address
- Update Monitoring systems (Nagios for us)
- Decommission the cluster and the last Exchange 2003 node.
Linux cloud server site hosting platform (Linux)
- Overview: My current employer was hosting custom PHP websites on a Windows server. Very few were database-driven, and staff members were using DreamWeaver to update these websites.
- Phase 1: Get thee to Linux. Our developers were developing on LAMP, so we needed to establish a standard LAMP platform that supported what they were building.
- First, I created a Standard Operating Environment (SOE) – that is what I called it. Our SOE consisted of:
- PXE/Kickstart system to deploy CentOS/RedHat to VMs hosted on VMware ESX servers
- Common set of scripts that lived on every server (examples: here, here, here)
- Custom “wrapper scripts” for every aspect of hosting LAMP-based sites. I called them wrapper scripts because they collect inputs and then pass the correct values along to the actual commands.
- Evaluated cloud-based hosting providers.
- We had experience with co-locating at Rackspace, and I personally had used Slicehost (before they were bought by Rackspace). Rackspace’s price was reasonable, and they have “fanatical support”.
- Phase 2: Move from custom-built PHP sites to a Content Management System (CMS)
- Staff members needed to concentrate on creating content, not on knowing HTML and using DreamWeaver.
- CMS systems were evaluated, and WordPress was selected as the standard CMS. The reasons we picked WordPress: lightweight, great community, our developers knew it, and it worked on LAMP out of the box.
- A workflow needed to be created to develop on a dev server and move sites to production when they are ready to go live.
- Phase NEXT:
- Real-time redundancy: Master/Slave MySQL and rsynced WordPress “binaries”
- “Offline” QA of plugin, theme, and WordPress updates.
System Center Configuration Manager (SCCM)
- Overview: My current employer did not have a method to update software or (re)deploy workstations. A small staff and a growing organization screamed for SCCM. Windows XP and Office 2003 were to be replaced by Windows 7 and Office 2010.
- Phase 1: Implement SCCM 2007
- Set up and installed SCCM 2007 in the organization and pushed the SCCM client to the XP machines
- Developed dynamic queries to identify stale software (example after this list)
- Customized install methods to get software out in a controlled manner.
- Came up with workflows for uninstalling and then upgrading difficult software (Java!!!)
- Created a Task sequence to install all “secondary applications”. This is updated continually to reference the most recent packages. We moved to this scenario rather than having to update the OSD offline media every time a new version of Flash came out.
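The dynamic queries live in SCCM as WQL, but the same inventory classes can be poked from PowerShell. An example – the site server name and site code are placeholders – that finds machines still reporting an old Java 6 build:

```powershell
# Placeholders: site server SCCM01, site code ABC
$ns = "root\sms\site_ABC"

# Inventory rows for machines still reporting an old Java 6 build
$stale = Get-WmiObject -ComputerName "SCCM01" -Namespace $ns `
    -Class SMS_G_System_ADD_REMOVE_PROGRAMS `
    -Filter "DisplayName LIKE 'Java(TM) 6%'"

# Resolve the ResourceIDs back to computer names
$stale | Select-Object -ExpandProperty ResourceID -Unique | ForEach-Object {
    (Get-WmiObject -ComputerName "SCCM01" -Namespace $ns `
        -Class SMS_R_System -Filter "ResourceID = $_").Name
}
```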
- Phase 2: Operating System Deployment
- Created an Operating System Deployment (OSD) environment using PXE. The thinking was to get it working with PXE first; once that was mastered, look into standalone media (USB flash drives) to speed up deployment.
- Custom WinPE WIMs: One thing that really helped us is that we launch a VNC server when booting into WinPE. This way we can instruct a user to “press F12 -> select Boot to USB -> click Next”, and then we get an email with the address of the VNC server running inside the WinPE environment.
- Custom HTML Applications (HTAs): To ease migrations, custom forms were created to prompt for the computer name and the Office version to install. These HTAs set task sequence variables that were used during the SCCM OSD (see the sketch after this list).
- Custom Office and Windows 7 deployments: Office was customized to set Outlook in offline mode, hardcode the serial number, and set the organization’s values. Windows was customized to set the initial home page.
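Under the hood, the HTAs just write SCCM task sequence variables through the Microsoft.SMS.TSEnvironment COM object. Ours did it in VBScript; the same calls from PowerShell, with made-up values:

```powershell
# Only works inside a running task sequence
$tsenv = New-Object -ComObject Microsoft.SMS.TSEnvironment

# Values collected from the form (hypothetical)
$tsenv.Value("OSDComputerName") = "NYC-WS-042"   # built-in OSD variable
$tsenv.Value("OfficeVersion")   = "2010"         # our own variable; later
                                                 # steps condition on it
```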
- Phase NEXT:
- SCCM 2012
- Migration of scripts to PowerShell
- Automate and make things faster and easier!
SharePoint 2007
- Overview: My employer was building a custom application that needed to be deployed in SharePoint 2007. I had implemented a SharePoint 2003 environment at my previous job, but not SharePoint 2007.
- Milestones:
- I went to developer training and learned about developing Solutions and Features. Though I was not a SharePoint developer, this training was key to our rollout of SharePoint.
- Using Features, I deployed SharePoint customizations across all the sites that we were setting up – customizations like:
- A jQuery delegate control
- A custom theme/CSS
- A Custom Staff Directory application
- Rolled out 2 custom applications embedded in SharePoint (one by an external developer, and one that I upgraded from .NET 1.1 and deployed inside SharePoint).
- Hacks! I view SharePoint as a “framework”, one that needs massaging. I tend to use CSS and jQuery to hack up SharePoint. Some of my more interesting “hacks” are: