
My script/procedure to move Hyper-V VMs to Azure

We have been moving resources from ESXi to Hyper-V to Azure. The ESXi-to-Hyper-V step is done via the Microsoft Virtual Machine Converter (MVMC). Here is the checklist/script/procedure I have been using to get from Hyper-V to Azure.

  1. Once the machine is in Hyper-V, make sure the VM's hard disks are VHD and not VHDX (a Convert-VHD sketch is at the end of this post)
  2. Make sure DHCP is set on the VM
  3. Make sure RDP is enabled (ours is set via group policy)
  4. Power down VM
  5. Run the PowerShell below to upload the disk (Add-AzureRmVhd) and create a new VM in Azure:
Login-AzureRmAccount
$VMName="NAMEOFMACHINE"
$DestinationVMSize="Standard_A1"
$DestinationAvailabilitySet="AvailabilitySetName"
$PrivateIpAddress="192.168.5.55"
$ResourceGroupName="YourResourceGroup"
$DestinationNetworkName="YourNetwork"
$DestinationNetworkSubnet="YourLanSubnet"
$Location="East US 2"
$OSType="Windows"
[switch]$DataDisk=$false
$DataDiskSize=100
$SourceSystemLocalFilePath="C:\PathToYour\VHDs\$($VMName)-System.vhd"
$SourceDataLocalFilePath="C:\PathToYour\VHDs\$($VMName)-Data.vhd"
$DestinationStorageAccountName="yourstorageaccount"
$DestinationSystemDiskUri= "http://$DestinationStorageAccountName.blob.core.windows.net/vhds/$VMName-System.vhd"
$DestinationDataDiskUri= "http://$DestinationStorageAccountName.blob.core.windows.net/vhds/$VMName-Data.vhd"
$DestinationSystemDiskName="$($VMNAME)_SYSTEM.vhd"
$DestinationDataDiskName="$($VMNAME)_DATA01.vhd"
 
Add-AzureRmVhd -Destination $DestinationSystemDiskUri -LocalFilePath $SourceSystemLocalFilePath -ResourceGroupName $ResourceGroupName
if ($DataDisk){
    Add-AzureRmVhd -Destination $DestinationDataDiskUri -LocalFilePath $SourceDataLocalFilePath -ResourceGroupName $ResourceGroupName
}
 
#region Build New VM
$DestinationVM = New-AzureRmVMConfig -vmName $vmName -vmSize $DestinationVMSize -AvailabilitySetId $(Get-AzureRmAvailabilitySet -ResourceGroupName $ResourceGroupName -Name $DestinationAvailabilitySet).Id
$nicName="$($VMName)_NIC01" 
$vnet = Get-AzureRmVirtualNetwork -Name $DestinationNetworkName -ResourceGroupName $ResourceGroupName
$subnet = $vnet.Subnets | where {$_.Name -eq $DestinationNetworkSubnet}
$nic = New-AzureRmNetworkInterface -Name $nicName -ResourceGroupName $ResourceGroupName -Location $Location -SubnetId $Subnet.Id -PrivateIpAddress $PrivateIpAddress
$DestinationVM = Add-AzureRmVMNetworkInterface -VM $DestinationVM -Id $nic.Id
 
If ($OSType -eq "Windows"){
    $DestinationVM = Set-AzureRmVMOSDisk -VM $DestinationVM -Name $DestinationSystemDiskName -VhdUri $DestinationSystemDiskUri -Windows -CreateOption attach
    if ($DataDisk){
        $DestinationVM = Add-AzureRmVMDataDisk -VM $DestinationVM -Name $DestinationDataDiskName -VhdUri $DestinationDataDiskUri -CreateOption attach -DiskSizeInGB $DataDiskSize
    }
}
 
New-AzureRmVM -ResourceGroupName $resourceGroupName -Location $Location -VM $DestinationVM

The most important part is to use “-CreateOption attach” with “Set-AzureRmVMOSDisk”, so the existing uploaded disk is attached rather than a new one created.
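For step 1, the VHDX-to-VHD conversion can be scripted too. A minimal sketch, assuming the Hyper-V PowerShell module is available (the source path is a placeholder):

# Convert a VHDX to the VHD format Azure expects; fixed size is the safe choice
Convert-VHD -Path "D:\VMs\$($VMName)-System.vhdx" -DestinationPath "C:\PathToYour\VHDs\$($VMName)-System.vhd" -VHDType Fixed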

Hope that helps someone.


Using Let’s Encrypt, certbot-auto with Apache on CentOS 6

There are plenty of better documented examples out there, so this is more of a note to self.

cd /opt
mkdir YourDir
cd YourDir/
wget https://dl.eff.org/certbot-auto
chmod a+x certbot-auto

./certbot-auto --apache certonly -d www.FirstDomain.com -d FirstDomain.com -d www.SecondDomain.com -d SecondDomain.com -d www.ThirdDomain.com -d ThirdDomain.com -d www.FourthDomain.com -d FourthDomain.com

The name on the cert will be the first domain you list in the command above. All the other names will be included as SANs (Subject Alternative Names) on the cert.

And to renew, cron this up:
/opt/YourDir/certbot-auto renew
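For example, a crontab entry to try twice a day (the schedule is my assumption; renew only replaces certs that are close to expiry):

0 3,15 * * * /opt/YourDir/certbot-auto renew --quiet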


Using ADFS for authenticating Apache-hosted sites

I have been learning ADFS/SAML on the fly. If you come across this, and you see that I am doing it all wrong, then let me know!

I wanted to use my existing ADFS infrastructure to authenticate an Apache resource on CentOS 6. Below is what I figured out (there are a lot of steps).

First, your site has to have HTTPS enabled.

Second, install Shibboleth: add it to your repos, yum install it, enable it, and start it.

wget http://download.opensuse.org/repositories/security:/shibboleth/CentOS_CentOS-6/security:shibboleth.repo -P /etc/yum.repos.d
yum install shibboleth
chkconfig shibd on
service shibd start

The install includes the “/etc/httpd/conf.d/shib.conf” file, which maps the Apache paths to the shibd service (and enables the module).

Next, I needed to edit the /etc/shibboleth/shibboleth2.xml file

Change:
<ApplicationDefaults entityID="https://sp.example.org/shibboleth" REMOTE_USER="eppn persistent-id targeted-id">
To:
<ApplicationDefaults entityID="https://www.SiteYouWantToProtect.com/shibboleth" REMOTE_USER="eppn persistent-id targeted-id">

And

Change:
<SSO entityID="https://idp.example.org/idp/shibboleth" discoveryProtocol="SAMLDS" discoveryURL="https://ds.example.org/DS/WAYF">
 SAML2 SAML1
</SSO>
To:
<SSO entityID="http://your.sitename.com/adfs/services/trust" discoveryProtocol="SAMLDS" discoveryURL="https://ds.example.org/DS/WAYF">
 SAML2 SAML1
</SSO>

At this point, I ran into trouble. Normally, it looks like you continue editing the /etc/shibboleth/shibboleth2.xml config file and set up the metadata provider to point to your ADFS server like this:

<MetadataProvider type="XML" uri="https://your.sitename.com/FederationMetadata/2007-06/FederationMetadata.xml" backingFilePath="federation-metadata.xml" reloadInterval="7200">

But I kept getting errors when I restarted shibd (service shibd restart). It seems that Shibboleth and ADFS don’t speak the same metadata language.
This site talks about it, and the solution is to download the metadata document, modify it, store it locally, and finally point the /etc/shibboleth/shibboleth2.xml config file to the “pre-processed” local metadata file.

I processed the metadata file in PowerShell with a script from here. I put the PowerShell code in a file named ADFS2Fed.ps1 and changed the top variables to look like this:

$idpUrl="https://your.sitename.com";
$scope = "sitename.com";

I downloaded the XML file from “https://your.sitename.com/FederationMetadata/2007-06/FederationMetadata.xml” and saved it as federationmetadata.xml (in the same directory as ADFS2Fed.ps1).

I ran the ADFS2Fed.ps1 script; it found the downloaded metadata file “federationmetadata.xml”, pre-processed it, and spit out “federationmetadata.xmlForShibboleth.xml”.

I uploaded this file to my /etc/shibboleth/ folder and named it “partner-metadata.xml”.

I then uncommented the following line in /etc/shibboleth/shibboleth2.xml:

 <MetadataProvider type="XML" validate="true" file="partner-metadata.xml"/>

That took care of the metadata provider.

Next, I needed to add the following to the bottom of the attribute-map.xml file, because the UPN that ADFS was sending was being ignored by shibd:

<Attribute name="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn" nameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:unspecified" id="upn" />

Next, I needed to allow Shibboleth to work with SELinux (source):

Create a file named mod_shib-to-shibd.te with:

module mod_shib-to-shibd 1.0;
require {
       type var_run_t;
       type httpd_t;
       type initrc_t;
       class sock_file write;
       class unix_stream_socket connectto;
}
#============= httpd_t ==============
allow httpd_t initrc_t:unix_stream_socket connectto;
allow httpd_t var_run_t:sock_file write;

Compile, package and load the module with the following 3 commands:

checkmodule -m -M -o mod_shib-to-shibd.mod mod_shib-to-shibd.te
semodule_package -o mod_shib-to-shibd.pp -m mod_shib-to-shibd.mod
semodule -i mod_shib-to-shibd.pp
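To confirm the module loaded, something like:

semodule -l | grep mod_shib-to-shibd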

Finally, the last step on the Apache/Linux side is to set the Apache virtual host to use Shibboleth to authenticate:

        <Directory /var/www/dir/to/site>
          AllowOverride All
          AuthType Shibboleth
          ShibRequireSession On
          require valid-user
          ShibUseEnvironment On
          Order allow,deny
          Allow from all
        </Directory>
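Before moving to the ADFS side, it is worth sanity-checking that the SP is publishing its metadata (this assumes the default Metadata handler is enabled; it is the same URL the ADFS wizard imports below):

curl -k https://www.SiteYouWantToProtect.com/Shibboleth.sso/Metadata | head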

On the Windows/ADFS side:

  • In the ADFS Management Console, choose Add Relying Party Trust.
  • Select Import data about the relying party published online or on a local network and enter the URL for the SP Metadata (https://your.sitename.com/Shibboleth.sso/Metadata)
  • Continuing the wizard, select Permit all users to access this relying party.
  • In the Add Transform Claim Rule Wizard, select Pass Through or Filter an Incoming Claim.
  • Name the rule (for example, Pass Through UPN) and select the UPN Incoming claim type.
  • Click OK to apply the rule and finalize the setup.

I hope this helped someone. It took me a while to figure this out.
In summary:

  1. Use SSL
  2. Install shibd
  3. Edit /etc/shibboleth/shibboleth2.xml
  4. Process the metadata file
  5. Edit /etc/shibboleth/shibboleth2.xml to point to the local processed metadata file
  6. Modify attribute-map.xml
  7. Allow shibd to work with SELinux
  8. Tell Apache to use Shibboleth
  9. Set up ADFS using the wizard

Problems with Citrix Receiver over VPN: ARGetNetworkLocationForStore returned NETWORK_LOCATION_NONE

I was working on my home lab, specifically setting up a Citrix XenDesktop environment. Since I didn’t have a NetScaler in place (yet), I connected to my home network from a Mac over a Cisco AnyConnect VPN.

While tunneling through the VPN, I could connect to the StoreFront and resources via HTML5, but I could never get the Receiver client to connect: I could authenticate, but I couldn’t ever reach the storefront (error: “Citrix Receiver cannot connect to the server. Check your network connection.”). I rebuilt the environment several times.

After some debugging of “Library/Logs/com.citrix.AuthManager.log”, I figured it out. The error I was getting was:

CMacServiceRecordConnector::CallARGetNetworkLocationForStore url=https://storefront.domain.com/Citrix/Main/discovery
Thu Jul 28 14:33:09 2016     > T:00006A3F api    .   .   .   .   .   .   .   {
Thu Jul 28 14:33:09 2016     < T:00006A3F api    .   .   .   .   .   .   .   }
Thu Jul 28 14:33:09 2016       T:00006A3F api    .   .   .   .   .   .   .   Receiver status = success
Thu Jul 28 14:33:09 2016       T:00006A3F api    .   .   .   .   .   .   .   location=NETWORK_LOCATION_NONE
Thu Jul 28 14:33:09 2016 <<<<< T:00006A3F api    .   .   .   .   .   .   .   Throwable created: CHttpException: ARGetNetworkLocationForStore returned NETWORK_LOCATION_NONE; server URL: 'https://storefront.domain.com/Citrix/Main/discovery'

---

Processing exception, type='HTTP exception' description='ARGetNetworkLocationForStore returned NETWORK_LOCATION_NONE; server URL: 'https://https://storefront.domain.com/Citrix/Main/discovery''

The “location=NETWORK_LOCATION_NONE” was the issue: Citrix Receiver didn’t know whether it was inside or outside the network. I figured the issue was the beacons, but setting them to obvious values did not fix it.

It wasn’t until I set the internal beacon in StoreFront to an IP address rather than a DNS name that everything started working.

My conclusion is that the Receiver client uses different DNS settings (most likely resolv.conf) than the browser. A browser (or any other networking app) on a Mac uses the “scutil --dns” settings.

From here:

Note: AnyConnect does not change the resolv.conf file on Macintosh OS X, but rather changes OS X-specific DNS settings. 
Macintosh OS X keeps the resolv.conf file current for compatibility reasons. 
Use the scutil --dns command in order to view the DNS settings on Macintosh OS X.
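To compare the two resolver views on the Mac while the VPN is up:

# BSD-style resolver file (what Receiver appears to consult)
cat /etc/resolv.conf
# Native OS X resolver configuration (what browsers use)
scutil --dns | grep nameserver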

I believe this is a bug in the way the receiver is programmed.


Connecting to the Salesforce REST API using PowerShell

As I said in my previous post, we are starting to use Salesforce, and I like REST APIs, so I wanted to see how to connect to Salesforce with cURL and PowerShell.

cURL was pretty easy; PowerShell was not so much. The biggest issue was that when I queried the standard “https://login.salesforce.com/services/oauth2/token” URL, I would get one response back, but if I tried again, it wouldn’t work. I had to install Fiddler to figure out what was going on. I finally found the error and the solution: use your instance URL. That took me half a day to figure out. Add in a typo of not having https in the URL, and I was not having fun. Once I figured out that you need to use your instance URL and https, I hit this error:

salesforce stronger security is required

So I had to figure out how to force Invoke-WebRequest or Invoke-RestMethod to use TLS 1.2. Here is the code I finally figured out; it gets an access token and queries accounts.


[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
$tokenurl = "https://InstanceName-dev-ed.my.salesforce.com/services/oauth2/token"
$postParams = [ordered]@{
    grant_type="password";
    client_id="ReallyLongClientIDReallyLongClientIDReallyLongClientIDReallyLongClientIDReallyLongCli";
    client_secret="1234567890123456789";
    username="YourUser@YourDomain.com";
    password="PasswordAndTokenNoSpaces";
}

$access_token=(Invoke-RestMethod -Uri $tokenurl -Method POST -Body $postParams).access_token

$url = "https://InstanceName-dev-ed.my.salesforce.com/services/data/v37.0/sobjects/Account"
Invoke-RestMethod -Uri $url -Headers @{Authorization = "Bearer " + $access_token}


You don’t need the [ordered] part of the hash table; I was just using it to troubleshoot.
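Once the token works, the query endpoint is handy too. A minimal sketch (the SOQL statement is just an example; same instance URL and API version as above):

$query = "SELECT Id,Name FROM Account LIMIT 5"
$queryUrl = "https://InstanceName-dev-ed.my.salesforce.com/services/data/v37.0/query?q=" + [uri]::EscapeDataString($query)
Invoke-RestMethod -Uri $queryUrl -Headers @{Authorization = "Bearer " + $access_token}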


Connecting to the Salesforce REST API using cURL

My company decided to use Salesforce. I have worked with Microsoft CRM, but not yet Salesforce. When learning about a new application, I like to see how I can access the data, and PowerShell and cURL are the simplest ways for me to understand how to connect to a REST API.

First step is getting an OAuth2 token. This is well documented, but being new to the platform, I needed to start from the beginning.

First you need to create a connected app. Copy the ID and the secret (I am using Lightning):

Setup Home -> Platform Tools -> Expand Apps -> Apps -> Connected Apps -> New. You can follow these directions

After it is created, you need to create a token for the user:

In the upper right corner of the top menu, click your “icon” -> Settings -> My Personal Information -> Reset My Security Token.
Click reset and the token will be emailed to you.

Here is the cURL command to make the connection and get the access_token:

response=$(curl -s https://InstanceName-dev-ed.my.salesforce.com/services/oauth2/token -d "grant_type=password" \
-d "client_id=ReallyLongClientIDReallyLongClientIDReallyLongClientIDReallyLongClientIDReallyLongCli" \
-d "client_secret=1234567890123456789" -d "[email protected]" \
-d "password=PasswordAndTokenNoSpaces")
ACCESS_TOKEN=$(echo $response | awk -F"," '{print $1}' | awk -F":" '{print $2}' | sed s/\"//g | tr -d ' ')

Things to note: I am using my instance URL rather than the standard login URL. There was mixed documentation as to which to use; the instance URL worked, and it was necessary for the PowerShell version in the next post. HTTPS is required, and the password is a mashup of your password and your security token.
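As an aside, if jq happens to be installed, extracting the token is less fragile than the awk/sed pipeline above:

ACCESS_TOKEN=$(echo "$response" | jq -r .access_token)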

And the code to pull some data using the token

curl -H "Authorization: Bearer $ACCESS_TOKEN" -H "X-PrettyPrint:1" "https://InstanceName-dev-ed.my.salesforce.com/services/data/v37.0/sobjects/Account"

This was pretty easy, as there are many examples out there. PowerShell, not so much.


Hidden or UnDocumented Network Security Group (NSG) default rule in Azure (DNS)

I have been working to get a Citrix Netscaler up and running in Azure. It has not been easy, as all the documentation is for ASM.

Our network configuration has IPsec tunnels going from OnPrem to Azure, and I have created two SubNets in Azure: a DMZ and a LAN. The DMZ has the following Outbound NSG rules (ACLs) that let the NetScaler talk to the LAN SubNet.

Get-AzureRmNetworkSecurityGroup -ResourceGroupName ResourceGroupName | Select SecurityRules -ExpandProperty SecurityRules | where {$_.Direction -eq "Outbound"} | Select Priority,Name,Protocol,SourceAddressPrefix,SourcePortRange,DestinationAddressPrefix,DestinationPortRange,Access | Sort-Object Priority|ft -AutoSize

DMZ Netscaler = 192.10.8.100
LAN DC = 192.10.9.10

Priority Name                           Protocol SourceAddressPrefix SourcePortRange DestinationAddressPrefix DestinationPortRange Access
-------- ----                           -------- ------------------- --------------- ------------------------ -------------------- ------
     101 LDAP_From_NSIP                 TCP      192.10.8.100        *               192.10.9.10              389                  Allow
     102 DNSUDP_From_NSIP               Udp      192.10.8.100        *               192.10.9.10              53                   Allow
     103 DNSTCP_From_NSIP               TCP      192.10.8.100        *               192.10.9.10              53                   Allow
     104 RADIUS_From_NSIP               Udp      192.10.8.100        *               192.10.9.10              1812                 Allow
    4095 Subnet_To_Internet             *        *                   *               Internet                 *                    Allow
    4096 Deny_All_Outbound              *        *                   *               *                        *                    Deny

As you can see, I add a DenyAll at the end even though there is one in the DefaultSecurityRules. I just like to see it there. I find it comforting.

I found that from the Netscaler, I could still do a DNS lookup against my OnPrem DC. How can that be?
Rules 101-104 are only for the Azure LAN DC, and then I DenyAll with 4096.
How can the Netscaler look anything up via the OnPrem DC?
I am DenyingAll!
I was pulling my hair out.

Then I realized that I had never changed the DNS server settings for my Virtual Network in Azure (I had needed OnPrem DNS so the local DC could join the domain when I built it!). I forgot to switch it to the local Azure LAN DC.

Therefore, even though there is a DenyAll in my NSG rules, there has to be a Hidden or UnDocumented rule that allows queries to the DNS servers listed in the Virtual Network settings.

As soon as I changed the DNS server settings to the local Azure LAN DC, I could no longer query the OnPrem DC.
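For reference, the DNS switch itself can be scripted. A sketch, assuming the names from above and that the VNet already has custom DNS servers set:

$vnet = Get-AzureRmVirtualNetwork -Name "YourVNet" -ResourceGroupName "ResourceGroupName"
# Point the Virtual Network at the Azure LAN DC instead of the OnPrem DC
$vnet.DhcpOptions.DnsServers.Clear()
$vnet.DhcpOptions.DnsServers.Add("192.10.9.10")
Set-AzureRmVirtualNetwork -VirtualNetwork $vnet

VMs pick up the new setting on their next DHCP lease renewal (rebooting is the sure way).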

I understand why it is there. If you put in a DenyAll (like I did), Windows servers will panic; they do not like it if they can’t reach a DNS server.

I think Azure needs to move the DNS server settings down to the SubNet level, since all VMs use DHCP (reservations). That way a DMZ and a LAN could use different DNS server settings, or none at all.

Just something I ran across today.


PowerShell to delete blobs in Azure

I was trying to delete a VHD in Azure via PowerShell and couldn’t find a good solution, so here is how you delete a blob in Azure:

$resourceGroupName="Default"
$storageAccountname="StorageAccount01"
$storageAccountKey = (Get-AzureRmStorageAccountKey -ResourceGroupName $resourceGroupName -Name $storageAccountname).Key1
$storageContext = New-AzureStorageContext -StorageAccountName $storageAccountName -StorageAccountKey $storageAccountKey
$containerName="vhds"
 
# List blobs
Get-AzureStorageBlob -Container $containerName -Context $storageContext
 
# Remove Blob
Get-AzureStorageBlob -Container $containerName -Context $storageContext -Blob "SystemDisk01.vhd" | Remove-AzureStorageBlob
Get-AzureStorageBlob -Container $containerName -Context $storageContext -Blob "DataDisk01.vhd" | Remove-AzureStorageBlob
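And a sketch to remove every blob whose name starts with a given prefix (the "VMName*" pattern is hypothetical; list first, then pipe to remove):

Get-AzureStorageBlob -Container $containerName -Context $storageContext | Where-Object {$_.Name -like "VMName*"} | Remove-AzureStorageBlob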

Hope that helps someone.

Commands to manually remove VMware Fusion from my Mac

This is documented somewhere, but I can never find the cut and paste commands.

I am putting them here:


sudo rm -rf /Library/Application\ Support/VMware/
sudo rm -rf /Library/Application\ Support/VMware\ Fusion
sudo rm -rf /Library/Preferences/VMware\ Fusion

rm -rf ~/Library/Application\ Support/VMware\ Fusion
rm -rf ~/Library/Caches/com.vmware.fusion
rm -rf ~/Library/Preferences/VMware\ Fusion
rm -rf ~/Library/Preferences/com.vmware.fusion.LSSharedFileList.plist
rm -rf ~/Library/Preferences/com.vmware.fusion.LSSharedFileList.plist.lockfile
rm -rf ~/Library/Preferences/com.vmware.fusion.plist
rm -rf ~/Library/Preferences/com.vmware.fusion.plist.lockfile
rm -rf ~/Library/Preferences/com.vmware.fusionDaemon.plist
rm -rf ~/Library/Preferences/com.vmware.fusionDaemon.plist.lockfile
rm -rf ~/Library/Preferences/com.vmware.fusionStartMenu.plist
rm -rf ~/Library/Preferences/com.vmware.fusionStartMenu.plist.lockfile

My Azure ASM to ARM script

This is the “script” I used to move our older classic (ASM) environment VMs to the new Azure Resource Manager.
It is not a function; I wanted to step through the process and make sure all was well at the different points in the script.
The script assumes that there is only one data disk (or none), and that you have created your availability set beforehand.
I based most of the script off this.

I hope this helps someone.

Add-AzureAccount 
Login-AzureRmAccount 
$VMName="ASMVM01"
$ServiceName="ASMVM01_Service"
$SourceVMSize="Standard_A3"
$DestinationAvailabilitySet="AvailabilitySet01"
$PrivateIpAddress="192.168.1.10"
$ResourceGroupName="ResourceGroup01"
$DestinationNetworkName="Network01"
$DestinationNetworkSubnet="SubNet01"
$Location="East US"
$OSType="Windows"
#$OSType="Linux"
[switch]$DataDisk=$false
$DataDiskSize=100
$SourceStorageAccountName="srcstorageaccount"
$DestinationStorageAccountName="dststorageaccount"

# ---- Edit above
#region Get Source Storage
$SourceStorageAccountKey=(Get-AzureStorageKey -StorageAccountName $SourceStorageAccountName).Primary
$SourceContext = New-AzureStorageContext -StorageAccountName $SourceStorageAccountName -StorageAccountKey $SourceStorageAccountKey
#endregion

#region Get Destination Storage
$DestinationAccountKey=(Get-AzureRmStorageAccountKey -ResourceGroupName $ResourceGroupName -Name $DestinationStorageAccountName).Key1
$DestinationContext = New-AzureStorageContext -StorageAccountName $DestinationStorageAccountName -StorageAccountKey $DestinationAccountKey
#endregion

#region Get SourceVM
$SourceVM = Get-AzureVm  -ServiceName $ServiceName -Name $VMName
if ($SourceVM.Status -ne "StoppedDeallocated"){
"You need to stop $VMName first"
return;
}
#endregion

#region Copy SystemDisk
$SourceSystemDisk=Get-AzureDisk | Where-Object { $_.AttachedTo.RoleName -eq "$VMName" } | where {$_.OS -eq $OSType}
$DestinationSystemDiskName="$($VMNAME)_SYSTEM.vhd"
write-host "Copying System Disk"
Write-Host "Start-AzureStorageBlobCopy -Context $SourceContext -AbsoluteUri $($SourceSystemDisk.MediaLink.AbsoluteUri) -DestContainer ""vhds"" -DestBlob $DestinationSystemDiskName -DestContext $DestinationContext -Verbose"
$SystemBlob = Start-AzureStorageBlobCopy -Context $SourceContext -AbsoluteUri $($SourceSystemDisk.MediaLink.AbsoluteUri) -DestContainer "vhds" -DestBlob $DestinationSystemDiskName -DestContext $DestinationContext -Verbose 
$SystemBlob | Get-AzureStorageBlobCopyState
While ($($SystemBlob | Get-AzureStorageBlobCopyState).Status -ne "Success"){
sleep 5
$BlobCopyStatus=$SystemBlob | Get-AzureStorageBlobCopyState
"$($($BlobCopyStatus).Status) ($($BlobCopyStatus).BytesCopied) of $($($BlobCopyStatus).TotalBytes) bytes)"
}
#endregion

#region Copy Data Disk
if ($DataDisk){
$SourceDataDisk=Get-AzureDisk | Where-Object { $_.AttachedTo.RoleName -eq "$VMName" } | where {! $_.OS}
$DestinationDataDiskName="$($VMNAME)_DATA01.vhd"
write-host "Copying Data disk"
Write-Host "Start-AzureStorageBlobCopy -Context $SourceContext -AbsoluteUri $($SourceDataDisk.MediaLink.AbsoluteUri) -DestContainer ""vhds"" -DestBlob $DestinationDataDiskName -DestContext $DestinationContext -Verbose"
$DataDiskBlob = Start-AzureStorageBlobCopy -Context $SourceContext -AbsoluteUri $($SourceDataDisk.MediaLink.AbsoluteUri) -DestContainer "vhds" -DestBlob $DestinationDataDiskName -DestContext $DestinationContext -Verbose 
$DataDiskBlob | Get-AzureStorageBlobCopyState
While ($($DataDiskBlob | Get-AzureStorageBlobCopyState).Status -ne "Success"){
sleep 5
$BlobCopyStatus=$DataDiskBlob | Get-AzureStorageBlobCopyState
"$($($BlobCopyStatus).Status) ($($BlobCopyStatus).BytesCopied) of $($($BlobCopyStatus).TotalBytes) bytes)"
}
}
#endregion

#region Build New VM
$DestinationVM = New-AzureRmVMConfig -vmName $vmName -vmSize $SourceVMSize -AvailabilitySetId $(Get-AzureRmAvailabilitySet -ResourceGroupName $ResourceGroupName -Name $DestinationAvailabilitySet).Id
$nicName="$($VMName)_NIC01"
$vnet = Get-AzureRmVirtualNetwork -Name $DestinationNetworkName -ResourceGroupName $ResourceGroupName 
$subnet = $vnet.Subnets | where {$_.Name -eq $DestinationNetworkSubnet}
$nic = New-AzureRmNetworkInterface -Name $nicName -ResourceGroupName $ResourceGroupName -Location $Location -SubnetId $Subnet.Id -PrivateIpAddress $PrivateIpAddress
$DestinationVM = Add-AzureRmVMNetworkInterface -VM $DestinationVM -Id $nic.Id 
$DestinationSystemDiskUri = "$($DestinationContext.BlobEndPoint)vhds/$DestinationSystemDiskName"
$DestinationDataDiskUri = "$($DestinationContext.BlobEndPoint)vhds/$DestinationDataDiskName"

If ($OSType -eq "Windows"){
$DestinationVM = Set-AzureRmVMOSDisk -VM $DestinationVM -Name $DestinationSystemDiskName -VhdUri $DestinationSystemDiskUri -Windows -CreateOption attach
if ($DataDisk){
$DestinationVM = Add-AzureRmVMDataDisk -VM $DestinationVM -Name $DestinationDataDiskName -VhdUri $DestinationDataDiskUri -CreateOption attach -DiskSizeInGB $DataDiskSize
}
}
If ($OSType -eq "Linux"){
$DestinationVM = Set-AzureRmVMOSDisk -VM $DestinationVM -Name $DestinationSystemDiskName -VhdUri $DestinationSystemDiskUri -Linux -CreateOption attach
if ($DataDisk){
$DestinationVM = Add-AzureRmVMDataDisk -VM $DestinationVM -Name $DestinationDataDiskName -VhdUri $DestinationDataDiskUri -CreateOption attach -DiskSizeInGB $DataDiskSize
}
}
 
New-AzureRmVM -ResourceGroupName $resourceGroupName -Location $Location -VM $DestinationVM
#endregion
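Once it finishes, a quick sanity check on the new VM (same variables as above):

Get-AzureRmVM -ResourceGroupName $ResourceGroupName -Name $VMName -Status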
