
Citrix License Server v12.x – How to renew an expired Apache Tomcat Server Certificate


Just recently I came across an expired Server Certificate on my Citrix License Server v12.x. As you might know, the Citrix License Server is based on an Apache Tomcat webserver running on your Windows Server. During installation a self-signed server certificate is issued and bound to the Apache web server’s port 8083. So how are you supposed to renew the server certificate in case it has expired and you need secure access to the corresponding Citrix License Server features?

Usually you access the Citrix License Administration Console through the URL

  • http://localhost:8082 or
  • http://<FQDNofYourLicenseServer>:8082

As opposed to the unsecure connection you can access the Citrix Simple License Service through a secure connection via

  • https://<FQDNofYourLicenseServer>:8083

thus requiring a corresponding server certificate that reflects the License Server’s FQDN.

As I use a PowerShell script for daily reports on my current license consumption and usage (which can be found here), I need secure access to the License Server’s service. The script uses the Citrix.Licensing.Admin.V1 snapin and the included cmdlets Get-LicCertificate and Get-LicInventory to retrieve all information required for this purpose. The PowerShell script
Get-CitrixLicenses.ps1 can be found here, courtesy of Clint McGuire‘s efforts.

The snapin Citrix.Licensing.Admin.V1 has to be installed manually and can be found in LicensingAdmin_PowerShellSnapIn_x64.msi under the \x64\Licensing folder of your XenDesktop 7.x installation medium:

license_server_01

After the server certificate has expired the script stopped working. As soon as the script issued the cmdlet Get-LicInventory -AdminAddress https://<FQDNofYourLicenseServer>:8083/ I received an error stating:

Get-LicInventory : CertificateVerificationFailed

license_server_02

Keep in mind that the exact same error shows up in case you don’t trust the server certificate’s issuer, which is usually the case with a default installation.

Interlude

This error pointed me in another direction as well. In order to avoid any certificate related errors while running PowerShell cmdlets against the Citrix License Server’s secure address, you have to read the CertHash property first, then provide it to the Get-LicInventory cmdlet, e.g.:

$ServerAddress = "https://<FQDNofYourLicenseServer>:8083"
$certhash = (Get-LicCertificate -AdminAddress $ServerAddress).certhash
$LicAll = Get-LicInventory -AdminAddress $ServerAddress -CertHash $certhash

Then you should receive the corresponding License Server information, e.g.:

license_server_03


LicenseProductName : XDT
LocalizedLicenseProductName : Citrix XenDesktop Platinum
LicenseEdition : PLT
LicenseLocalizedEdition :
LicenseSubscriptionAdvantageDate : 2014:0826
LicenseType : Retail
LocalizedLicenseType : Retail
LicensesInUse : 44
LicensesAvailable : 44
LicenseOverdraft : 4
LicenseModel : UD
LocalizedLicenseModel : User/Device

LicenseProductName : XDT
LocalizedLicenseProductName : Citrix XenDesktop Platinum
LicenseEdition : PLT
LicenseLocalizedEdition :
LicenseSubscriptionAdvantageDate : 2016:0806
LicenseType : Retail
LocalizedLicenseType : Retail
LicensesInUse : 16
LicensesAvailable : 55
LicenseOverdraft : 5
LicenseModel : UD
LocalizedLicenseModel : User/Device

OK then, back to the actual renewal of the server certificate. My methodology looked as follows:

  1. create a new server certificate including a private key
  2. export the server certificate to pfx file format
  3. extract server certificate and private key from pfx to separate files
  4. replace the Apache Tomcat’s expired server certificate with the newly created one

1. Request a new Server Authentication Certificate

As I run a Microsoft PKI for internal server certificates, I simply requested a new Server Authentication certificate for my Citrix License Server on my Microsoft CA. You only have to ensure that you choose the correct certificate template and mark the corresponding private key as exportable. The Name should reflect the License Server’s FQDN:

license_server_04

Issue the newly requested server certificate on your CA, then import/install it onto your Citrix License Server:

license_server_05

license_server_06

license_server_07

2. Export the Server Certificate including Private Key

After importing you need to export the server certificate including the private key, i.e. in the PFX file format. To achieve this simply open an MMC and add the Certificates Snap-in pointing to your Certificates – Current User store. There you should find the recently installed server certificate. Right-click the corresponding certificate and hit All Tasks | Export…:

license_server_08

license_server_09

Now rinse and repeat, as we need to export the server certificate once more, this time in CER file format, i.e. without the private key and as Base-64 encoded X.509 (.CER):

license_server_10

license_server_11

3. Extract Private Key and Server Certificate from PFX

In order to achieve this I utilized GnuWin32’s OpenSSL binaries for Windows. You can download them here. I downloaded and installed the Complete package, except sources. After installation you’ll find the required openssl.exe executable file in C:\Program Files (x86)\GnuWin32\bin. In order to extract the certificate’s private key from the PFX file, issue the following commands in an elevated command prompt (the certificate itself is the Base-64 encoded .cer file exported above):

openssl pkcs12 -in <ExportedCertificate>.pfx -nocerts -out key.pem -nodes
openssl rsa -in key.pem -out server.key

After issuing these commands you should at least have two files available:

  • key.pem
  • server.key

4. Import newly created server certificate to Apache Tomcat

The extracted server.key private key file, together with the Base-64 encoded certificate exported earlier (renamed to server.crt), can now be used to replace the expired Apache server certificate located at C:\Program Files (x86)\Citrix\Licensing\LS\conf … or … C:\Program Files (x86)\Citrix\Licensing\WebServicesForLicensing\Apache\conf … 

Subsequently you have to verify or adjust the configuration file httpd-ssl.conf which can be found in C:\Program Files (x86)\Citrix\Licensing\WebServicesForLicensing\Apache\conf\extra 

SSLCertificateFile "C:/program files (x86)/citrix/Licensing/WebServicesForLicensing/Apache/conf/server.crt"
SSLCertificateKeyFile "C:/program files (x86)/citrix/Licensing/WebServicesForLicensing/Apache/conf/server.key"

As a final step restart the corresponding Citrix Licensing services.
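
If you prefer PowerShell for the restart, a minimal sketch (the display name filter is an assumption and may need adjusting to your License Server version):

# restart all Citrix Licensing related services; adjust the filter if your display names differ
Get-Service -DisplayName "Citrix*Licensing*" | Restart-Service -Verbose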

Navigate to the URL https://<FQDNofCitrixLicenseServer>:8083 and verify that no certificate errors appear.
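
To double-check the certificate now being served on port 8083 without a browser, the following PowerShell sketch reads it straight off the TLS handshake (the FQDN is a placeholder; validation is deliberately bypassed so even an untrusted or expired certificate can be inspected):

# connect to port 8083 and print subject and expiry date of the presented certificate
$fqdn   = "<FQDNofYourLicenseServer>"
$client = New-Object System.Net.Sockets.TcpClient($fqdn, 8083)
$ssl    = New-Object System.Net.Security.SslStream($client.GetStream(), $false, { param($s, $cert, $chain, $errors) $true })
$ssl.AuthenticateAsClient($fqdn)
$cert   = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2($ssl.RemoteCertificate)
"{0} valid until {1}" -f $cert.Subject, $cert.NotAfter
$ssl.Dispose(); $client.Dispose()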

5. Script

The final script looks as follows:

<#
.SYNOPSIS
Reports on Citrix licenses in use for selected products.
 
.DESCRIPTION
This script will query your Citrix License server and output the in
use and total licenses for individual products.
 
.NOTES
Requires Citrix Licensing Snapin (Citrix.Licensing.Admin.V1)
If your License server has a self-signed cert you may get license
errors when running this.  I've resolved this in my test environments
by installing the cert as a Trusted Root CA Cert.
 
Source: http://www.clintmcguire.com/scripts/get-citrixlicenses/
Author: Clint McGuire
Version 1.0
Copyright 2013

Code edited by Alexander Ollischer
Source: https://blog.ollischer.com
Version 1.1
Copyright 2017
 
.EXAMPLE
PS> .\Get-CitrixLicenses.ps1
Using 71 Citrix XenApp Enterprise of 132 available.
 
#>
############################################################################################
#DEFINE THESE VARIABLES FOR YOUR ENVIRONMENT
 
#Enter the URL for your License server, typically this uses HTTPS and port 8083
#E.G. "https://licensingservername:8083"
$ServerAddress = "https://<LicenseServerName>.domain.local:8083"
 
#Enter the license type you would like to output, this can be a comma separated list, include 
#each option in single quotes
#E.G. 'Citrix XenApp Enterprise','Citrix XenDesktop Enterprise','Citrix EdgeSight for XenApp'
$LicenseTypeOutput = 'Citrix XenApp Enterprise','Citrix XenDesktop Enterprise','Citrix EdgeSight for XenApp','Citrix XenApp Platinum','Citrix XenDesktop Platinum'

############################################################################################
 
#Path to HTMLReport File
$FilePath = "C:\Scripts\LMC\LicAll.html"
$smtpsettings = @{
	To =  "recipient@domain.local"
	From = "sender@domain.local"
    Subject = "Citrix License Usage"
	SmtpServer = "IPorFQDNofYourMailServer"
	}
     
#Check for Licensing Snap-in, add if not currently added
#PowerShell Snap-in is contained in: LicensingAdmin_PowerShellSnapIn_x64.msi
if ( (Get-PSSnapin -Name Citrix.Licensing.Admin.V1 -ErrorAction SilentlyContinue) -eq $null )
{
    Add-PsSnapin Citrix.Licensing.Admin.V1
}
 
#Create Hash tables
$LicAll = @{}
$LicInUse = @{}
$LicAvailable = @{}
 
#Build License Display hash table 
$certhash = (Get-LicCertificate -adminaddress $ServerAddress).certhash
$LicAll = Get-LicInventory -AdminAddress $ServerAddress -CertHash $certhash
Get-LicInventory -AdminAddress $ServerAddress -CertHash $certhash | ConvertTo-HTML | Out-File $FilePath

ForEach ($LicInfo in $LicAll) 
{
    $Prod = $LicInfo.LocalizedLicenseProductName
    $InUse = $LicInfo.LicensesInUse
    $Avail = $LicInfo.LicensesAvailable
    if ($LicInUse.ContainsKey($Prod))
        {
                if ($LicInUse.Get_Item($Prod) -le $InUse) 
                {
                    $LicInUse.Set_Item($Prod, $InUse)
                }
        }
    else
        {
            $LicInUse.add($Prod, $InUse)
        }
    if ($LicAvailable.ContainsKey($Prod))
        {
                if ($LicAvailable.Get_Item($Prod) -le $Avail) 
                {
                    $LicAvailable.Set_Item($Prod, $Avail)
                }
        }
    else
        {
            $LicAvailable.add($Prod, $Avail)
        }
}
 
#Output license usage for each requested type.
Foreach ($Type in $LicenseTypeOutput)
{
    $OutPutLicInUse = $LicInUse.Get_Item($Type)
    $OutPutAvail = $LicAvailable.Get_Item($Type)
    Write-Host "Using" $OutPutLicInUse  $Type  "of" $OutPutAvail "available."
}

#Send Email Notification
$Output = (Get-Content $FilePath | Out-String)
Send-MailMessage @smtpsettings -Body $Output -BodyAsHtml -Encoding ([System.Text.Encoding]::UTF8)

 

Further reading:



Windows Server 2016 ADFS v4.0 – The specified service account ‘CN=svc-ADFS-gMSA’ did not exist. Attempt to create the group Managed Service Account failed. Error: There is no such object on the server.


While running the Active Directory Federation Services Configuration Wizard for the first time on a newly installed Windows Server 2016, I ran into the following error after deciding to create the first federation server in a federation server farm, and creating a Group Managed Service Account (gMSA) as Service Account for my ADFS implementation:

The specified service account ‘CN=svc-ADFS-gMSA’ did not exist. Attempt to create the group Managed Service Account failed. Error: There is no such object on the server.

During troubleshooting it turned out that the underlying issue actually lay far deeper than expected…

At first all prerequisites checks passed successfully:

Prior to running the Configuration Wizard I successfully created the Key Distribution Service (KDS) root key for the gMSA by executing:

Add-KdsRootKey -EffectiveTime (Get-Date).AddHours(-10)

What did I miss? Well, silly me, I simply did not provision the gMSA prior to running the Configuration Wizard, though the Configuration Wizard states, that it will create a gMSA if it does not already exist:

So, how do I successfully create a corresponding gMSA in order to make it work with AD FS?

You can provision a gMSA using the New-ADServiceAccount cmdlet, where domain.com is your own domain:

New-ADServiceAccount -Name svc-ADFS-gMSA -DNSHostName sts.domain.com -KerberosEncryptionType RC4, AES128, AES256 -PrincipalsAllowedToRetrieveManagedPassword serv-adfs$ -ServicePrincipalNames http/sts.domain.com/domain.com, http/sts.domain.com/domain, http/sts/domain.com, http/sts/domain -Path "CN=Users,DC=DOMAIN,DC=local"

Verify whether the gMSA has been successfully provisioned in the corresponding OU provided with the -Path parameter:

Verify whether all required Service Principal Names (SPN) have been registered and associated with the newly provisioned gMSA by executing the following command:

setspn /l svc-ADFS-gMSA
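
A PowerShell alternative to setspn, assuming the Active Directory module is installed:

# list the SPNs registered on the gMSA
Get-ADServiceAccount -Identity svc-ADFS-gMSA -Properties ServicePrincipalNames |
    Select-Object -ExpandProperty ServicePrincipalNames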

After running the Configuration Wizard again it still failed, this time with an error stating:

The system cannot find the file specified

A little research turned up hints like “the GMSA was moved from the Managed Service Accounts container in Active Directory” and “make sure you have a Managed service account group object in ADUC (not an OU)”. Checking my Active Directory I realized that the required container Managed Service Accounts was actually missing! And I was unable to identify why it was missing in the first place. Looking into my Deleted Objects container came up … empty.

So, how do I successfully re-create a corresponding Managed Service Accounts container in order to make it work with AD FS?

Further research revealed I had to re-create a corresponding container named “Managed Service Accounts” with ADSIEdit and ensure its security properties are correctly set to “Enable Inheritance”.
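
The container can also be re-created with the Active Directory PowerShell module instead of ADSIEdit; a minimal sketch, assuming the container goes back to the domain root (adjust the -Path to your domain):

# re-create the well-known Managed Service Accounts container at the domain root
New-ADObject -Name "Managed Service Accounts" -Type container -Path "DC=domain,DC=local"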

Then I deleted my previously provisioned gMSA:

After that I ran the AD FS Configuration Wizard again and it was completed successfully:

Checking the AD FS Service as well as the corresponding Event Log showed that all went well:

Further reading:


Windows Server 2016 ADFS v4.0 – Certain (non-admin) Users Cannot Login – no error, just plain login mask


Just recently I happened to face a pretty annoying issue that took me a couple of hours to solve. In the end the cause was actually quite simple and obvious.

Long story short – newly implemented AD FS 4.0 farm, properly configured and already working. No funny claims rules, no additional authentication factors enforced, still login was working only for a couple of accounts with no obvious similarities or discrepancies. All users were created equal.

Testing was done by navigating to https://sts.domain.com/adfs/ls/idpinitiatedSignon.aspx and logging in with valid user credentials. For some users it worked and for others… well it did not. They were just thrown back to the login mask, no error message or anything.

Please note that the aforementioned endpoint is by default disabled in Windows Server 2016 ADFS 4.0. You have to enable it first by running:

Set-AdfsProperties -EnableIdPInitiatedSignonPage $true

Troubleshooting was quite a chore as no events were triggered and/or logged in any of the available Event Logs or log/tracing files. When deliberately using bogus login credentials, e.g. a wrong username or password, I received the corresponding invalid username or password error messages.

I use a managed service account (gMSA) for the ADFS service, as this is best practice and recommended by Microsoft.

As stated by Vasil Michev in his blog post (and answer to my problem):

Turns out, the service account was missing the required permissions. Simply giving the account Read access to the user account in question resolved the issue – the user was now able to properly use AD FS. As it turns out, the root cause was that, for whatever reason, the access entry for “Authenticated Users” was removed from the “Pre-Windows 2000 Compatible Access” group in the affected environments.

and

In addition, a very similar issue might occur if the application in question is not able to read the tokenGroupsGlobalAndUniversal attribute. To solve this, make sure that the service account is a member of the “Windows Authorization Access Group”.

Furthermore as stated by Aaron.H in the corresponding Microsoft Forums thread:

I discovered that the Pre-Windows 2000 Compatible Access group in my production domain did not have Authenticated Users as a member like the new lab domain did. Once I added Authenticated Users to the Pre-Windows 2000 group, I was able to authenticate using regular domain accounts to ADFS.

Another reply states:

I just added the service account (which was created manually rather than a managed service account) for adfs to the Pre-Windows 2000 group and that resolved my issue, so you don’t have to add the Authenticated Users group if you don’t want to.

And a final statement on this issue:

Don’t know if you guys solved it by now but i wanted to let you know the Pre-windows 2000 Group membership didn’t work for me. However when i added the ADFS group managed service account to the ‘ Windows Authorization Access Group’  it worked instantly. According to the description of this group (‘Members of this group have access to the computed tokenGroupsGlobalAndUniversal attribute on User objects’) this is expected behaviour.
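
If you want to apply the Windows Authorization Access Group fix with PowerShell, a minimal sketch assuming the gMSA from the previous post (the Pre-Windows 2000 Compatible Access change is easiest done in Active Directory Users and Computers):

# grant the ADFS gMSA access to the computed tokenGroupsGlobalAndUniversal attribute
Add-ADGroupMember -Identity "Windows Authorization Access Group" -Members (Get-ADServiceAccount -Identity svc-ADFS-gMSA)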

There are a lot of reasons why correct permissions are not being assigned throughout the domain. The adminCount attribute, disabling inheritance, et al. So thanks to all guys involved in this issue and for clarifying things!

Further reading


Citrix XenDesktop 7.x – Printer auto-creation failure. Reason: AddPrinter() failed with status 0x7A


Today I came across an issue where printer auto creation failed for a couple of HP printers on one client computer only. The XenDesktop worker showed an Event ID 1106 stating Printer auto-creation failure. Reason: AddPrinter() failed with status 0x7A.

Searching for this particular error didn’t reveal anything helpful. So I had to start digging into it by myself.

Converting hex 7A into decimal 122 and using net helpmsg 122 does provide some additional information:

The data area passed to a system call is too small.
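
The same lookup can be done in one go with PowerShell:

# convert the hex status to decimal and resolve the Win32 error text
$code = 0x7A   # = 122
(New-Object System.ComponentModel.Win32Exception([int]$code)).Message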

Searching for this error message revealed that there is an issue with the printer or the driver associated with it on the client. It looks like the printer drivers were not installed properly on the user’s PC. Re-installing the driver on the workstation solved the issue.



Citrix XenDesktop 7.x – Citrix Session Printers are not visible via Control Panel, Devices And Printers



Affected System: Windows Server 2012 R2 as well as Windows Server 2016

Citrix Environment: XenDesktop 7.15 LTSR CU2

In my case the solution came down to services not running on my XenDesktop servers, because I had hardened them prior to provisioning. I had run a couple of hardening scripts, which disabled a number of supposedly unnecessary services, resulting in the aforementioned issue. The affected services are:

  • Device Install Service (DeviceInstall)
  • Device Management Enrollment Service (DmEnrollmentSvc)
  • Device Setup Manager (DsmSvc)

They were disabled and I had to revert them to the default setting, i.e. 

  • Device Install Service (DeviceInstall) | Manual (Trigger Start)
  • Device Management Enrollment Service (DmEnrollmentSvc) | Manual 
  • Device Setup Manager (DsmSvc) | Manual (Trigger Start)

After resetting the services via Group Policy everything worked as expected.
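
For a single server you can also reset the services locally; a sketch (Set-Service cannot configure the “(Trigger Start)” part, but the existing trigger definitions are normally left intact and can be inspected with sc.exe qtriggerinfo <ServiceName>):

# revert the hardened services to their default Manual start type and start them
"DeviceInstall", "DmEnrollmentSvc", "DsmSvc" | ForEach-Object {
    Set-Service -Name $_ -StartupType Manual
    Start-Service -Name $_ -ErrorAction SilentlyContinue
}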

Further reading:


Microsoft Exchange 2016 and IIS 8.5+ – Enable HTTP Strict Transport Security (HSTS)


As part of my Security Best Practices regarding Microsoft Exchange and Microsoft IIS I always implement a couple of configuration settings to harden the underlying IIS, e.g.

  • disabling the “X-AspNet-Version” header,
  • disabling deprecated and/or unsecure protocols,
  • disabling deprecated and/or unsecure ciphers,
  • setting up for SSL Perfect Forward Secrecy,
  • enabling TLS 1.2,
  • et al

In order to tighten your security on Exchange 2016’s IIS you should at least start with enabling HTTP Strict Transport Security (HSTS) which I’m going to describe here. As per Microsoft:

HTTP Strict Transport Security (HSTS), specified in RFC 6797, allows a website to declare itself as a secure host and to inform browsers that it should be contacted only through HTTPS connections. HSTS is an opt-in security enhancement that enforces HTTPS and significantly reduces the ability of man-in-the-middle type attacks to intercept requests and responses between servers and clients.

There are a couple of recommendations and Best Practices available that give further information on how to harden Windows Server 2012 R2/2016 as well as Exchange 2013/2016, respectively. In this article I will focus on HSTS only, as I didn’t find any particular articles outlining this issue […]. Please check the Further reading section at the bottom of this article for more information, e.g. IIS Crypto by Nartac Software:

IIS Crypto is a free tool that gives administrators the ability to enable or disable protocols, ciphers, hashes and key exchange algorithms on Windows Server 2008, 2012 and 2016. It also lets you reorder SSL/TLS cipher suites offered by IIS, implement best practices with a single click, create custom templates and test your website.

You could always have a look at the Center for Internet Security’s (CIS) Benchmark Documents as well, e.g. for Microsoft Exchange Server 2016, which provides a “prescriptive guidance for establishing a secure configuration posture for Microsoft Exchange Server 2016” and “was tested against Microsoft Exchange Server 2016.”

Windows Server 2012 R2 (IIS 8.5)

1. Open the Internet Information Services (IIS) Manager console, and click your server. Then click HTTP Response Headers in the IIS section of the middle pane:

2. Click Add in the Actions pane on the right, enter the following values in the Add Custom HTTP Response Header dialogue window, then click OK:

  • Name: Strict-Transport-Security
  • Value: max-age=31536000

3. The newly added Custom HTTP Response Header will be added to the list of configured HTTP Response Headers in the middle pane:

4. The new setting will become effective immediately. You don’t have to iisreset your Exchange server.
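
The same header can also be added from PowerShell via the WebAdministration module; a sketch that targets the server-level (applicationHost) configuration, equivalent to the GUI steps above:

Import-Module WebAdministration
# add the HSTS response header at server level
Add-WebConfigurationProperty -PSPath 'MACHINE/WEBROOT/APPHOST' `
    -Filter "system.webServer/httpProtocol/customHeaders" -Name "." `
    -Value @{ name = 'Strict-Transport-Security'; value = 'max-age=31536000' }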

Windows Server 2016 (IIS 10)

With IIS 10.0 version 1709 onwards Microsoft has implemented native HSTS support. Have a look at IIS 10.0 Version 1709 Native HSTS Support on how to configure HSTS in Windows Server 2016 version 1709+ via PowerShell:

Import-Module IISAdministration
Reset-IISServerManager -Confirm:$false
Start-IISCommitDelay

$sitesCollection = Get-IISConfigSection -SectionPath "system.applicationHost/sites" | Get-IISConfigCollection
$siteElement = Get-IISConfigCollectionElement -ConfigCollection $sitesCollection -ConfigAttribute @{"name"="Contoso"}
$hstsElement = Get-IISConfigElement -ConfigElement $siteElement -ChildElementName "hsts"
Set-IISConfigAttributeValue -ConfigElement $hstsElement -AttributeName "enabled" -AttributeValue $true
Set-IISConfigAttributeValue -ConfigElement $hstsElement -AttributeName "max-age" -AttributeValue 31536000
Set-IISConfigAttributeValue -ConfigElement $hstsElement -AttributeName "redirectHttpToHttps" -AttributeValue $true

Stop-IISCommitDelay
Remove-Module IISAdministration

The new setting will become effective immediately. You don’t have to iisreset your Exchange server.

Verification

You can check whether HSTS has been successfully implemented by browsing to SSLLabs’ SSL Server Test page and enter the server’s corresponding hostname (in case it is publicly resolvable and directly reachable from the internet, which often is the case with SMBs):
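
For a quick internal check without SSL Labs you can also inspect the response header directly; a sketch (the URL is a placeholder for one of your Exchange virtual directories):

# the header should come back as max-age=31536000
$response = Invoke-WebRequest -Uri "https://mail.domain.local/owa/healthcheck.htm" -UseBasicParsing
$response.Headers['Strict-Transport-Security']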

That’s it! You have successfully enabled HSTS.

Further reading:


Microsoft Exchange 2016 Migration – Outlook keeps prompting: “The Microsoft Exchange Administrator has made a change that requires you quit and restart Outlook”


While migrating from Microsoft Exchange 2010 to Exchange 2016 we came upon a typical issue in which Outlook kept giving the message: “The Microsoft Exchange Administrator has made a change that requires you quit and restart Outlook”.

This means that users were totally unable to connect to Exchange during the migration and co-existence phase:

The reason for that was quite obvious (at least after some research and thinking): we had switched Client Access from the old Exchange 2010 servers to the newly implemented Exchange 2016 servers, i.e. all client connections were now directed to Exchange 2016 only. As no mailboxes had been migrated at that point in time, client connections then needed to be proxied down to Exchange 2010 by Exchange 2016.

All things considered, it could only be a question of the protocol used by our Outlook 2010 clients. Assuming that our Outlook 2010 clients try to connect via RPC-over-TCP (the default setting configured in all internal Outlook 2010 profiles), this is expected behaviour, as Exchange 2013 and later no longer support RPC-over-TCP connections from client applications. They only support RPC-over-HTTP, and in later versions MAPI-over-HTTP (MAPIhttp). An automatic switch to RPC-over-HTTP does not occur, so Outlook 2010 won’t find a valid connection endpoint and cannot be proxied down to Exchange 2010. The takeaway is that if Outlook clients are using RPC-over-TCP prior to being migrated, it becomes all the more important to have a process to update those profiles properly.

Required Outlook 2010 version:

In an Outlook profile under Exchange Proxy Settings you’ll find two options regarding fast and slow networks:

  • On fast networks, connect using HTTP first, then connect using TCP/IP
  • On slow networks, connect using HTTP first, then connect using TCP/IP

In order to understand what this setting really means have a look at the article Outlook Anywhere: Fast vs. Slow network connection over HTTP or TCP/IP:

Outlook Anywhere; Emphasis on “Anywhere”
First of all, let’s take a closer look at the terms being used here:

  • TCP/IP connection
    This is the traditional (internal) direct-to-Exchange connection also known as a “RPC over TCP” connection or as a (not entirely technical correct) MAPI connection.
  • HTTP or HTTPS connection
    This is the over-the-Internet connection introduced in Outlook 2003, also known as a “RPC over HTTP” connection and nowadays known as “Outlook Anywhere”.
    As of Outlook 2013 SP1 in combination with Exchange 2013 SP1, this is a MAPI over HTTP connection or simply: MAPI/HTTP.

The description for the HTTP connection doesn’t really hold true anymore, as the HTTP connection can also be used internally. In fact, over the past few Exchange versions, the trend was to move away from direct RPC connections and towards HTTP connections, even internally. Actually, as of Exchange 2013, all Outlook connectivity takes place via Outlook Anywhere.

So, with having the option On fast networks, connect using HTTP first, then connect using TCP/IP being disabled we receive the aforementioned error:

With having the option On fast networks, connect using HTTP first, then connect using TCP/IP being enabled the connection can be established successfully:

Now, in order to rollout such a configuration change on a large scale (after careful testing of course!) you could utilize GPOs by either configuring the corresponding Outlook setting or by using GPP and manipulating the user’s registry settings directly.

The required Group Policy Setting can be found in the corresponding Microsoft Office ADMX Templates under:

  • User Configuration | Policies | Administrative Templates | Microsoft Outlook 2013 | Account Settings | Exchange
  • User Configuration| Policies | Administrative Templates | Microsoft Outlook 2016 | Account Settings | Exchange

Please keep in mind that this setting is unavailable with Office 2010 ADMX.

Alternatively you could add the setting manually via GPP (Group Policy Preferences) under:

  • User Configuration| Preferences | Windows Settings | Registry

The corresponding GPP registry XML looks like this:

<?xml version="1.0" encoding="UTF-8"?>
<Registry uid="{ADF10FBF-F683-4549-A19E-885D5A58D23E}" changed="2018-11-26 11:04:59" image="0" status="RPC" name="RPC" clsid="{9CD4B2F4-923D-47f5-A062-E897DD1DAD50}">
<Properties name="" value="" type="REG_SZ" key="Software\Policies\Microsoft\office\14.0\outlook\RPC" hive="HKEY_CURRENT_USER" default="0" displayDecimal="1" action="C"/>
</Registry>
<?xml version="1.0" encoding="UTF-8"?>
<Registry uid="{0C4FA337-45F4-4A16-965F-48155F45186F}" changed="2018-11-26 11:05:56" image="10" status="ProxyServerFlags" name="ProxyServerFlags" clsid="{9CD4B2F4-923D-47f5-A062-E897DD1DAD50}">
<Properties name="ProxyServerFlags" value="0000002F" type="REG_DWORD" key="Software\Policies\Microsoft\office\14.0\outlook\RPC" hive="HKEY_CURRENT_USER" default="0" displayDecimal="1" action="C"/>
</Registry>
The equivalent setting as a plain registry (.reg) file:

Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\Policies\Microsoft\office\14.0\outlook\rpc]
"ProxyServerFlags"=dword:0000002f

With having the option On fast networks, connect using HTTP first, then connect using TCP/IP being enabled by GPO or GPP, the settings are greyed out, cannot be changed by the user, and the connection can be established successfully:
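
For testing the change on a single client before the GPO/GPP rollout, a PowerShell sketch that writes the same value directly (0x2F matches the .reg snippet above; 14.0 is the Outlook 2010 version key):

# force "connect using HTTP first" for Outlook 2010 via the policy registry value
$key = "HKCU:\Software\Policies\Microsoft\office\14.0\outlook\rpc"
New-Item -Path $key -Force | Out-Null
New-ItemProperty -Path $key -Name ProxyServerFlags -Value 0x2F -PropertyType DWord -Force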

The Outlook Connection Status now shows: RPC/HTTP

Prior to switching Client Access from Exchange 2010 to Exchange 2016 all Outlook 2010 clients connected via RPC-over-TCP, which shows as: RPC/TCP

With this configuration adjustment we were able to proceed with the Exchange 2016 migration without having any Outlook 2010 connectivity issues.

Further reading:



Exchange 2016 – Migration of existing, custom Receive Connectors from Exchange 2010 to 2016 results in multiple errors


Another Exchange 2016 migration, another error while trying to migrate custom Receive Connectors from Exchange 2010 to 2016 … an analysis.

Courtesy of Shane Jackson, it was easy to copy Receive Connectors from Exchange 2010 to Exchange 2013. The following PowerShell lines will help you with that:

# pipe old Receive Connectors except Default as well as Client into an array
[array]$ReceiveConnectors = Get-ReceiveConnector -Server $OldServer | Where {$_.Name -notlike "Default $($OldServer)" -and $_.Name -notlike "Client $($OldServer)"}
# create new Receive Connectors - WhatIf
$ReceiveConnectors | foreach {
New-ReceiveConnector -Name $_.Name -RemoteIPRanges $_.RemoteIPRanges -bindings $_.Bindings -Banner $_.Banner -ChunkingEnabled $_.ChunkingEnabled -DefaultDomain $_.DefaultDomain -DeliveryStatusNotificationEnabled $_.DeliveryStatusNotificationEnabled -EightBitMimeEnabled $_.EightBitMimeEnabled -DomainSecureEnabled $_.DomainSecureEnabled -LongAddressesEnabled $_.LongAddressesEnabled -OrarEnabled $_.OrarEnabled -Comment $_.Comment -Enabled $_.Enabled -ConnectionTimeout $_.ConnectionTimeout -ConnectionInactivityTimeout $_.ConnectionInactivityTimeout -MessageRateLimit $_.MessageRateLimit -MaxInboundConnection $_.MaxInboundConnection -MaxInboundConnectionPerSource $_.MaxInboundConnectionPerSource -MaxInboundConnectionPercentagePerSource $_.MaxInboundConnectionPercentagePerSource -MaxHeaderSize $_.MaxHeaderSize -MaxHopCount $_.MaxHopCount -MaxLocalHopCount $_.MaxLocalHopCount -MaxLogonFailures $_.MaxLogonFailures -MaxMessageSize $_.MaxMessageSize -MaxProtocolErrors $_.MaxProtocolErrors -MaxRecipientsPerMessage $_.MaxRecipientsPerMessage -PermissionGroups $_.PermissionGroups -PipeliningEnabled $_.PipeLiningEnabled -ProtocolLoggingLevel $_.ProtocolLoggingLevel -RequireEHLODomain $_.RequireEHLODomain -RequireTLS $_.RequireTLS -EnableAuthGSSAPI $_.EnableAuthGSSAPI -ExtendedProtectionPolicy $_.ExtendedProtectionPolicy -SizeEnabled $_.SizeEnabled -TarpitInterval $_.TarpitInterval -Server $NewServer -WhatIf
}

But with Exchange 2016 those handy Powershell lines won’t help you anymore as executing those results in multiple errors:

1st error, due to multiple Receive Connectors listening on port 25:

The values that you specified for the Bindings and RemoteIPRanges parameters conflict with the settings on Receive connector "<Server>\Default Frontend <Server>". Receive connectors assigned to different Transport roles on a single server must listen on unique local IP address & port bindings.
+ CategoryInfo : InvalidOperation: (<Server>\Anonymous <Server>:ReceiveConnector) [New-ReceiveConnector], ReceiveConnectorRoleConflictException
+ FullyQualifiedErrorId : [Server=<Server>,RequestId=b5f0fc88-538c-49a3-9a8f-9a279fc29e37,TimeStamp=17.01.2018 15:00:24] [FailureCategory=Cmdlet-ReceiveConnectorRoleConflictException] 8B5C879,Microsoft.Exchange.Management.SystemConfigurationTasks.NewReceiveConnector
+ PSComputerName : <Server>

2nd error, which is sort of specific for my environment, as there were some Custom permissions in place on the Receive Connector being migrated from Exchange 2010 to Exchange 2016:

You can’t use Custom to specify PermissionGroups.
+ CategoryInfo : InvalidOperation: (<Server>\Relay <Server>:ReceiveConnector) [New-ReceiveConnector], CustomCannotBeS…GroupsException
+ FullyQualifiedErrorId : [Server=<Server>,RequestId=b5f0fc88-538c-49a3-9a8f-9a279fc29e37,TimeStamp=17.01.2018 15:00:24] [FailureCategory=Cmdlet-CustomCannotBeSetForPermissionGroupsException] 22111502,Microsoft.Exchange.Management.SystemConfigurationTasks.NewReceiveConnector
+ PSComputerName : <Server>

Now let’s clarify those errors and get rid of them, step by step:

1st error: 

Courtesy of Mark Gossa:
“If you want to create a new receive connector that listens on port 25, you can do this, but you have to create them using the Frontend Transport role if you have either an Exchange 2016 server or an Exchange 2013 server with both the CAS and MBX roles installed on the same server.”

This can be achieved by using the -TransportRole parameter of the New-ReceiveConnector cmdlet with the value FrontendTransport, e.g.:

New-ReceiveConnector -Name "Custom Receive Connector" -RemoteIPRange ("10.10.61.176/32") -TransportRole "FrontendTransport" -Bindings ("0.0.0.0:25") -Usage "Custom" -Server $env:COMPUTERNAME

As soon as the custom Receive Connectors have been successfully added we can configure required custom permissions.

2nd error: What are Custom permissions in terms of Receive Connectors?

With Exchange Receive Connectors you can apply different permission settings. But as soon as you apply dedicated AD permissions for a certain user to a Receive Connector, i.e. 

  • ms-Exch-SMTP-Submit
  • ms-Exch-Bypass-Anti-Spam
  • ms-Exch-SMTP-Accept-Any-Recipient
  • et al

via the -ExtendedRights parameter of the Add-ADPermission cmdlet, the PermissionGroups property will be set to Custom, e.g.

Get-ReceiveConnector "SERVER2016\Custom Receive Connector" | Add-ADPermission -User DOMAIN\SpecialUser -ExtendedRights ms-Exch-SMTP-Submit,ms-Exch-Bypass-Anti-Spam,ms-Exch-SMTP-Accept-Any-Recipient

In order to copy those custom AD permissions from an existing Receive Connector to a new Receive Connector destined for Exchange 2016, you first have to identify them by running:

Get-ReceiveConnector "SERVER2010\Custom Receive Connector" | Get-ADPermission | ? {$_.ExtendedRights} | ? {[string]$_.ExtendedRights -like "Ms-Exch*"} | select identity,user,extendedrights

The result might look something like this:

Relay permissions are Active Directory permissions, not Exchange permissions. Most of these settings are therefore easy to identify and copy, except the ability of a Receive Connector to act as an external relay, which is configured using the ms-Exch-SMTP-Accept-Any-Recipient extended AD permission and is not so visible. In order to reduce the output of the aforementioned cmdlet the following EMS one-liner is useful. It helps to determine which Receive Connectors in the organization are open relay connectors so you can configure the new ones likewise:

Get-ReceiveConnector | Get-ADPermission | where {$_.identity -notlike "*Default*" -and $_.identity -notlike "*Client*" -and $_.user -like "NT AUTHORITY\*" -and $_.ExtendedRights -like "MS-Exch-SMTP-Accept-Any-Recipient"} | select identity, user, ExtendedRights

The result might look something like this where I identified two custom Receive Connectors for open relaying:

Custom permissions can also be viewed in the configuration portion of the AD schema using ADSIEdit. The Receive Connectors can be found under:

CN=Custom Receive Connector,CN=SMTP Receive Connectors,CN=Protocols,CN=SERVER2010,CN=Servers,CN=Exchange Administrative Group (FYDIBOHF23SPDLT),CN=Administrative Groups,CN=MYORG,CN=Microsoft Exchange,CN=Services,CN=Configuration,DC=domain,DC=local

Now you can add identified and required permissions to your custom Receive Connector, i.e. if you need to be able to send mail outside your organization you will need to add the MS-Exch-SMTP-Accept-Any-Recipient permission by executing:

Get-ReceiveConnector "SERVER2016\Custom Receive Connector" | Add-ADPermission -User 'NT AUTHORITY\Anonymous Logon' -ExtendedRights MS-Exch-SMTP-Accept-Any-Recipient
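
To carry over all custom extended rights from the old connector in one go, a sketch following the same filtering used above (connector names are the examples from this post; test carefully before running it against production):

# copy every non-inherited ms-Exch-* extended right from the old connector to the new one
$aces = Get-ReceiveConnector "SERVER2010\Custom Receive Connector" | Get-ADPermission |
    Where-Object { -not $_.IsInherited -and [string]$_.ExtendedRights -like "ms-Exch*" }
foreach ($ace in $aces) {
    Get-ReceiveConnector "SERVER2016\Custom Receive Connector" |
        Add-ADPermission -User $ace.User -ExtendedRights $ace.ExtendedRights
}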

Further reading:


Exchange 2010 – Outlook 2010 OAB download fails with error 0x80190194 – A troubleshooting approach


Just recently I came across an interesting support case which involved an Exchange 2010 Offline Address Book (OAB) and Outlook 2010 clients trying to download it. The affected users received an error stating: Error 0x80190194 – The operation failed. The problem is, this is a very common error when downloading the OAB and there are many server-side problems which can generate it:


This error can be caused by many issues, e.g.

  • missing IIS and/or folder permissions (server-side)
  • IIS authentication issues (server-side)
  • IIS misconfigured http redirection (server-side)
  • file replication service issues (server-side)
  • missing or misconfigured OAB distribution settings (server-side)
  • OAB generating mailbox issues (server-side)
  • DAG replication issues and arbitration mailbox (server-side)
  • missing or misconfigured proxy settings (client-side)
  • download issues in terms of BITS (client-side)
  • et al

Troubleshooting methodology

In order to identify the affected OAB that my users tried to download I first had to get hands on the corresponding OAB GUID by running:

Get-OfflineAddressBook | ft Name,GUID

With the OAB GUID identified I started testing with a browser by navigating to the corresponding URL https://<ServerFQDN>/OAB/<GUID>/oab.xml, checking each server separately. I thereby received an Error 404 – Not Found on one of my servers, which in turn resulted in the aforementioned Outlook error 0x80190194 for my users. This error (which is basically a 404) appeared sporadically depending on the Exchange 2010 Server they were redirected to through our Load Balancer:
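
The same per-server check can be scripted so the load balancer is bypassed; a sketch, with hypothetical server names and the OAB GUID identified above:

# query oab.xml on each Exchange 2010 CAS directly; a 404 points to the server missing the OAB
$oabGuid = "72189f79-62fa-4bcf-82ea-56dc45cfdeb0"
foreach ($server in "ex2010-01.domain.local", "ex2010-02.domain.local") {
    try {
        $result = Invoke-WebRequest -Uri "https://$server/OAB/$oabGuid/oab.xml" -UseDefaultCredentials -UseBasicParsing
        "{0} : HTTP {1}" -f $server, $result.StatusCode
    } catch {
        "{0} : {1}" -f $server, $_.Exception.Message
    }
}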

Further research on the server showing the Error 404 in particular showed that OAB with GUID 72189f79-62fa-4bcf-82ea-56dc45cfdeb0 is missing in IIS under the OAB node, thus assuming that this particular OAB has not been replicated between my Exchange 2010 Servers:

Missing OABs in the file system … check the Microsoft Exchange File Distribution Service (MSExchangeFDS) on the affected server

The following location on the Client Access Server can be checked to see if the OAB files have been replicated:

  • %ExchangeInstallPath%ClientAccess\OAB\<GUID>

Permissions should also be checked. If any of the default permissions are locked or are missing, the OAB files might not be replicated. Have a look at How to solve Internal Error 500 when testing connection to the OAB or Exchange 2010: OAB download error 80190194.

I copied the missing web.config file and restarted the MSExchangeFDS service, to no avail. Then I executed Get-OfflineAddressBook “<OAB Name>” | Update-OfflineAddressBook, to no avail. Finally I executed iisreset, to no avail.

Well, checking the distribution methods for the affected OAB revealed that the 2nd Exchange 2010 Server was missing in the Distribution tab of the OAB’s Properties. I should have done this in the first place 🙂

After adding the missing Exchange 2010 Server and restarting the Microsoft Exchange File Distribution service by executing Restart-Service MSExchangeFDS the replication started immediately. The missing OAB showed up in IIS as well as in the folder structure.

Alternatively you could run the following cmdlet to initiate the OAB replication:

Update-FileDistributionService <ServerFQDN> -type oab

But even after a successful OAB replication I still received an HTTP error 500 – Internal Server Error when accessing the OAB via browser in order to verify that everything’s fine:

So, back to the web.config file, that was presumably missing the first time around. What does a web.config file actually do when placed in the OAB file directory:

When you configure HTTP Redirection a web.config file is created in the OAB directory. This file has incorrect permissions. Assign Read and Read & Execute permission to the Authenticated Users group, then restart IIS using iisreset /noforce.
Now you can try to download the OAB using Outlook. It may be required to download it twice because sometimes the name of the OAB doesn’t appear at first try.

Well, that didn’t apply to me, so I checked the web.config and IIS settings to verify whether any settings had been adjusted in the past that I didn’t know of. As that wasn’t the case I deleted the file, but the issue still persisted.

Verify OfflineAddressBook property on the affected user’s mailbox and adjust the missing OAB association:

Get-Mailbox <User>| fl Name,OfflineAddressBook
Get-Mailbox <User> | Set-Mailbox -OfflineAddressBook "<OAB Name>"

Further troubleshooting on the client-side:

With a corresponding Event Log entry on the client computer, Event ID 27, Source: Outlook, the issue can be further analyzed by following Microsoft’s KB843483 article:

Log Name:      Application
Source:        Outlook
Date:          05.12.2018 12:28:31
Event ID:      27
Task Category: None
Level:         Information
Keywords:      Classic
User:          N/A
Computer:      <Computername>
Description:
Starting OAB download. (See event data).
Event XML:
<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
  <System>
    <Provider Name="Outlook" />
    <EventID Qualifiers="16384">27</EventID>
    <Level>4</Level>
    <Task>0</Task>
    <Keywords>0x80000000000000</Keywords>
    <TimeCreated SystemTime="2018-12-05T11:28:31.000000000Z" />
    <EventRecordID>18716</EventRecordID>
    <Channel>Application</Channel>
    <Computer>Computername</Computer>
    <Security />
  </System>
  <EventData>
    <Data>Starting OAB download. (See event data).</Data>
    <Binary>0200000000000000730100007301000073010000000000000000000000000000E9FD00000000000000000000D05B00003C600000306A0000286B0000F8670000601B2E13520B0000000000005C0000002000000005000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000005C0063006F006E0073006F0072007300660069006E0061006E007A002E006400650020002D00200041006C006C006500000031623662323737382D643765332D343732612D616234622D64356335343264343836333000</Binary>
  </EventData>
</Event>
Log Name:      Application
Source:        Outlook
Date:          05.12.2018 12:28:45
Event ID:      27
Task Category: None
Level:         Warning
Keywords:      Classic
User:          N/A
Computer:      <Computername>
Description:
OAB Download Failed. (Result code in event data).
Event XML:
<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
  <System>
    <Provider Name="Outlook" />
    <EventID Qualifiers="16384">27</EventID>
    <Level>3</Level>
    <Task>0</Task>
    <Keywords>0x80000000000000</Keywords>
    <TimeCreated SystemTime="2018-12-05T11:28:45.000000000Z" />
    <EventRecordID>18717</EventRecordID>
    <Channel>Application</Channel>
    <Computer>Computername</Computer>
    <Security />
  </System>
  <EventData>
    <Data>OAB Download Failed. (Result code in event data).</Data>
    <Binary>0200000094011980730100007301000073010000000000000000000000000000E9FD00000000000000000000D05B00003C600000306A0000286B0000F8670000601B2E13520B0000000000005C000000200000000400000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000</Binary>
  </EventData>
</Event>

In my case the error messages stated exactly what I tried to achieve:

  • Result Code 2, i.e. You forced a full .oab file download manually.

Furthermore I checked some other client-side settings and known issues that could cause an OAB download error, such as whether BITS has some kind of problem when trying to download the OAB. An error 0x80200049 is often caused by the BITS job list being full. To fix this, you must clear/reset the BITS job list. Microsoft Outlook uses BITS to download the OAB, and if the BITS queue fills up it simply stops:

bitsadmin /list /verbose

bitsadmin /reset

Check the client-side proxy settings, i.e. the client should have unobstructed access to Exchange via HTTPS:

netsh winhttp show proxy

netsh winhttp reset proxy

netsh winhttp set proxy <proxy>:<port>

In the end the last thing I had to ensure and configure was to enable GlobalWebDistribution for all my Exchange 2010 OABs in order to prevent server-specific connections when clients try to download the OAB:

Get-OfflineAddressBook | Where {$_.ExchangeVersion.ExchangeBuild.Major -Eq 14} | Set-OfflineAddressBook -GlobalWebDistributionEnabled $True -VirtualDirectories $Null

In my particular case that did it: the issue was resolved and setting the VirtualDirectories property to $Null reverted my solution attempt that I suggested previously 🙂 And I can tell you why: because my environment consisted of Exchange 2010 and Exchange 2016 Servers due to being in the middle of a migration.

Update:

From Exchange 2013 onward the CAS role proxies the OAB download request to an appropriate Mailbox role server. The CAS role maintains a log of each request it handles in the log files present in the folder %ExchangeInstallPath%\Logging\HttpProxy\OAB\. These log files are an excellent tool to identify which mailbox server the CAS chose to serve the request. Download issues can be analyzed with the log files found in %ExchangeInstallPath%\Logging\OABDownload. The OAB generation logs can be found in the \Logging\OABGeneratorLog folder.

Maybe one of the steps outlined above will help you, too, to get rid of issues with downloading OABs for good.

Further reading:


Citrix NetScaler ADC and ShareFile StorageZone Controller Setup – “The folder you are looking for could not be found” on ShareFile Web App while accessing Network shares


To help you set up NetScaler for ShareFile with on-premises storage zone controllers, an easy-to-use wizard is included in the GUI. The wizard prompts you for basic information about your StorageZones Controller environment and then generates a configuration that:

  • Load balances traffic across StorageZones Controllers
  • Provides user authentication for StorageZone Connectors
  • Validates URI signatures for ShareFile uploads and downloads
  • Terminates SSL connections at the NetScaler appliance

The diagram (courtesy of © Citrix Systems) shows these Netscaler components created by the configuration:

After running the built-in Setup Citrix ADC for ShareFile wizard, users started complaining that they could not access their network shares anymore. Those network shares had been provided via the integrated ShareFile Connector functionality and had been up and running very smoothly prior to adding NetScaler to the equation.

As soon as users tried to access a network share via a StorageZone Connector they received an error indicating “Failed to load folder -The folder you are looking for could not be found”:

As per Citrix, you may see following error while accessing Network Shares on ShareFile Web App:

The folder you are looking for could not be found. This can occur if the link you used is incorrect, or if it points to a folder that has been deleted or to which you do not have access.

Mind that this happened with the ShareFile Web App only, i.e. when accessing network shares within a browser, whereas accessing the exact same network share via Citrix Files for iOS worked like a charm. After having consulted our change management documentation, it quickly became clear that only one culprit could remain, as there had been no change in user passwords, permissions, group memberships, UNC paths regarding the underlying network shares, et al. All users have the required permissions. The root cause could only be traced back to the changes in the NetScaler configuration a couple of days earlier. So I started investigating the details of the Setup Citrix ADC for ShareFile wizard, its configuration changes and effects on my setup by reading ShareFile On-prem and NetScaler: A Comprehensive Configuration Guide & Deep Dive, amongst others:

In case you messed up with your ShareFile Configuration: you can try to remove it with Remove ShareFile Configuration

With a better understanding of all things NetScaler & ShareFile, I did a little research and found a Citrix Discussion dealing with my issue, and it turned out that the Setup Citrix ADC for ShareFile wizard (to the contrary) does not handle all the configuration required to access network shares via a StorageZone Connector. Further configuration has to be done manually to make it work (again), as can be read here:

To support restricted zones or web access to StorageZone Connectors, you must perform additional NetScaler configuration after you complete the NetScaler for ShareFile wizard.

The additional configuration provides the Netscaler components shown in the following diagram:

The description of the additional configuration of Netscaler in Citrix Docs is – to say the least – not very accurate. Without appropriate formatting of the corresponding text passages and additional depictions illustrating every single configuration step, manual adjustments are difficult to comprehend for non-Netscaler-aficionados. Therefore I’d like to expand on Citrix Docs and provide a more elaborate description of the configuration steps required. So, what do we need to add to the existing Netscaler configuration:

  1. a third NetScaler load-balancing virtual server
  2. a third CS policy to allow anonymous access from clients for the HTTP OPTIONS verb
  3. update the existing CS policy used for traffic to StorageZone Connectors (by default: _SF_CIF_SP_CSPOL)
  4. update the existing CS policy used for traffic to StorageZones for ShareFile Data (by default: _SF_SZ_CSPOL)
  5. create a heartbeat monitor for the StorageZones Controller service and bind it to the CS virtual server for ShareFile
  6. verify the ShareFile Load Balancing configuration

First, add a new Load Balancing vServer as follows:

add lb vserver vsrv_SF_ZONE_OPTION SSL 0.0.0.0 0 -persistenceType NONE -cltTimeout 180	
bind lb vserver vsrv_SF_ZONE_OPTION 	
set ssl vserver vsrv_SF_ZONE_OPTION -sslProfile ns_default_ssl_profile_frontend	
bind ssl vserver vsrv_SF_ZONE_OPTION -certkeyName 	
add cs policy _SF_ZONE_OPTIONS_CSPOL -rule "HTTP.REQ.METHOD.EQ(\"OPTIONS\")"
Load Balancing vServer settings
bind the corresponding Sharefile Service and certificate

The full policy expression for the newly created CS policy (by default: _SF_ZONE_OPTIONS_CSPOL) should be as follows:

HTTP.REQ.METHOD.EQ("OPTIONS")

Adjust the existing _SF_CIF_SP_CSPOL policy in terms of Expression. The full policy expression for an existing _SF_CIF_SP_CSPOL should be as follows:

HTTP.REQ.URL.CONTAINS("/cifs/") || HTTP.REQ.URL.CONTAINS("/sp/") || HTTP.REQ.URL.CONTAINS("/ProxyService/")
_SF_CIF_SP_CSPOL Policy Expression

Adjust the existing _SF_SZ_CSPOL policy in terms of Expression. The full policy expression for an existing _SF_SZ_CSPOL should be as follows:

HTTP.REQ.URL.CONTAINS("/cifs/").NOT && HTTP.REQ.URL.CONTAINS("/sp/").NOT && HTTP.REQ.URL.CONTAINS("/ProxyService/").NOT
_SF_SZ_CSPOL Policy Expression

In the end your CS policies should look like this (in terms of Expressions):

final CS policies and corresponding Expressions

Now adjust the existing CS vServer for Sharefile (in my case vsrv_SF_CS_ShareFile) regarding its Policy Bindings in that you add the newly created CS Policy (in my case _SF_ZONE_OPTIONS_CSPOL) as the third CS policy with a Priority of 90 and set the Target Load Balancing Virtual Server to the newly created LB vServer (in my case vsrv_SF_ZONE_OPTION):

The StorageZone Heartbeat Monitor can be added using the CLI by running the following commands:

add lb monitor SZC_Heartbeat HTTP-ECV -send "GET /heartbeat.aspx" -recv "***ONLINE***" -secure YES
bind service <Name of your LB service i.e. internal SF server> -monitorName SZC_Heartbeat

The newly added StorageZone Heartbeat Monitor should look as follows:

StorageZone Heartbeat Monitor Configuration

The complete CLI command list would look like this, where the following values need to be replaced according to your environment:

  • CertDisplayName (server certificate name you want to bind to your vServer)
  • NameOfYourSFServer (i.e. the Sharefile Server you’ve added to your NetScaler configuration)
  • NameOfYourSFCSvServer (i.e. the Sharefile Content Switching vServer)
add lb vserver vsrv_SF_ZONE_OPTION SSL 0.0.0.0 0 -persistenceType NONE -cltTimeout 180	
bind lb vserver vsrv_SF_ZONE_OPTION 	
set ssl vserver vsrv_SF_ZONE_OPTION -sslProfile ns_default_ssl_profile_frontend	
bind ssl vserver vsrv_SF_ZONE_OPTION -certkeyName 	
add cs policy _SF_ZONE_OPTIONS_CSPOL -rule "HTTP.REQ.METHOD.EQ(\"OPTIONS\")"	
add cs policy _SF_SZ_CSPOL -rule "HTTP.REQ.URL.CONTAINS(\"/cifs/\").NOT && HTTP.REQ.URL.CONTAINS(\"/sp/\").NOT && HTTP.REQ.URL.CONTAINS(\"/ProxyService/\").NOT"
add cs policy _SF_CIF_SP_CSPOL -rule "HTTP.REQ.URL.CONTAINS(\"/cifs/\") || HTTP.REQ.URL.CONTAINS(\"/sp/\") || HTTP.REQ.URL.CONTAINS(\"/ProxyService/\")"
bind cs vserver  -policyName _SF_ZONE_OPTIONS_CSPOL -targetLBVserver vsrv_SF_ZONE_OPTION -priority 90

Finally, go to Traffic Management > Load Balancing > Virtual Servers to view the status of the load balancing virtual servers created for ShareFile. It may look similar to my configuration:

Virtual Servers configured for ShareFile Load Balancing

While testing your new configuration and accessing network shares via ShareFile connectors you should see an increasing hit number in the Hits column of your corresponding CS policies:

Increasing hit number in the Hits column

Update
You have to consider network restrictions as well, as security appliances can interfere with ShareFile traffic and network flow, especially when firewall settings do not allow (read: whitelist) traffic to the ShareFile domains, endpoints, and IPs, i.e. the ShareFile Control Plane IP ranges. Have a look at CTX208318 and CTX234446.

In another case, if you attempt to access a ShareFile network share and are prompted for user credentials, the ShareFile Web App credentials may not work. Have a look at CTX233739 as well:

Solution
The authentication settings of the IIS CIFS application on the StorageZone Controller need correction. Follow these steps to resolve the issue:

1. Log on to the StorageZone Controller(s) and open IIS Manager.
2. Expand Default Web Site.
3. Click on the CIFS virtual directory, then on Authentication.
4. Ensure Anonymous Authentication is Enabled.
5. Ensure ASP.NET Impersonation is Disabled.
6. Ensure Basic Authentication is Enabled.
7. Ensure Forms Authentication is Disabled.
8. Ensure Windows Authentication is Disabled.

Reference: The Authentication settings of an IIS CIFS server
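If you prefer to verify these settings from PowerShell instead of clicking through IIS Manager, a read-only sketch along the following lines should work. The path 'Default Web Site/cifs' is an assumption, adjust it to your StorageZone Controller; Forms Authentication and ASP.NET Impersonation are checked via the ASP.NET settings in IIS Manager:

Import-Module WebAdministration
$loc = 'Default Web Site/cifs'   # assumed location of the CIFS virtual directory
foreach ($auth in 'anonymousAuthentication','basicAuthentication','windowsAuthentication') {
    $enabled = (Get-WebConfigurationProperty -Filter "system.webServer/security/authentication/$auth" -Name enabled -PSPath 'IIS:\' -Location $loc).Value
    "{0,-25} enabled: {1}" -f $auth, $enabled
}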

Further reading:

The post Citrix NetScaler ADC and ShareFile StorageZone Controller Setup – “The folder you are looking for could not be found” on ShareFile Web App while accessing Network shares appeared first on blog - Alexander Ollischer | Citrix | Microsoft.

Citrix License Server v12.x – How to renew an expired Apache Tomcat Server Certificate


Just recently I came across an expired Server Certificate on my Citrix License Server v12.x. As everybody might know, the Citrix License Server is based on an Apache Tomcat webserver running on your Windows Server. During installation a self-signed server certificate is being issued and bound to the Apache’s web server port 8083. So how are you supposed to renew the server certificate in case it has expired and you need secure access to the corresponding Citrix License Server features?

Usually you access the Citrix License Administration Console through the URL

  • http://localhost:8082 or
  • http://<FQDNofYourLicenseServer>:8082

As opposed to the unsecure connection, you can access the Citrix Simple License Service through a secure connection via

  • https://<FQDNofYourLicenseServer>:8083

thus requiring a corresponding server certificate reflecting the License Server’s FQDN on the certificate.

As I use a PowerShell script for daily reports on my current license consumption and usage (which can be found here), I need secure access to the License Server’s service. The script uses the Citrix.Licensing.Admin.V1 snapin and the included cmdlets Get-LicCertificate.ps1 as well as Get-LicInventory.ps1 to retrieve all information required for this purpose. The PowerShell script
Get-CitrixLicenses.ps1 can be found here, courtesy of Clint McGuire‘s efforts.

The snapin Citrix.Licensing.Admin.V1 has to be installed manually and can be found in LicensingAdmin_PowerShellSnapIn_x64.msi under the \x64\Licensing folder of your XenDesktop 7.x installation medium:

license_server_01

After the server certificate has expired the script stopped working. As soon as the script issued the cmdlet Get-LicInventory -AdminAddress https://<FQDNofYourLicenseServer>:8083/ I received an error stating:

Get-LicInventory : CertificateVerificationFailed

license_server_02

Keep in mind that the exact same error shows up in case you don’t trust the server certificate’s issuer, which is usually the case with a default installation.

Interlude

This error pointed me in another direction as well. In order to avoid any certificate related errors while running PowerShell cmdlets against the Citrix License Server’s secure address, you have to read the CertHash property first, then provide it to the Get-LicInventory cmdlet, e.g.:

$ServerAddress = "https://<FQDNofYourLicenseServer:8083"
$certhash = (Get-LicCertificate -AdminAddress $ServerAddress).certhash
$LicAll = Get-LicInventory -AdminAddress $ServerAddress -CertHash $certhash

Then you should receive the corresponding License Server information, e.g.:

license_server_03


LicenseProductName : XDT
LocalizedLicenseProductName : Citrix XenDesktop Platinum
LicenseEdition : PLT
LicenseLocalizedEdition :
LicenseSubscriptionAdvantageDate : 2014:0826
LicenseType : Retail
LocalizedLicenseType : Retail
LicensesInUse : 44
LicensesAvailable : 44
LicenseOverdraft : 4
LicenseModel : UD
LocalizedLicenseModel : User/Device

LicenseProductName : XDT
LocalizedLicenseProductName : Citrix XenDesktop Platinum
LicenseEdition : PLT
LicenseLocalizedEdition :
LicenseSubscriptionAdvantageDate : 2016:0806
LicenseType : Retail
LocalizedLicenseType : Retail
LicensesInUse : 16
LicensesAvailable : 55
LicenseOverdraft : 5
LicenseModel : UD
LocalizedLicenseModel : User/Device

Ok then, back to the actual renewal of the server certificate… My methodology looked as follows:

  1. create a new server certificate including a private key
  2. export the server certificate to pfx file format
  3. extract server certificate and private key from pfx to separate files
  4. exchange the Apache Tomcat’s expired server certificate with newly created certificate

1. Request a new Server Authentication Certificate

As I run a Microsoft PKI for internal server certificates, I simply requested a new Server Authentication certificate for my Citrix License Server on my Microsoft CA. You only have to ensure that you choose the correct certificate template and mark the corresponding key as exportable. The Name should reflect the License Server’s FQDN:

license_server_04

Issue the newly requested server certificate on your CA, then import/install it onto your Citrix License Server:

license_server_05

license_server_06

license_server_07

2. Export the Server Certificate including the Private Key

After importing, you need to export the server certificate including the private key, i.e. to the PFX file format. To achieve this, simply open an MMC and add the Certificates snap-in pointing to your Certificates – Current User store. There you should find the recently installed server certificate. Right-click the corresponding certificate and hit All Tasks | Export…:

license_server_08

license_server_09

Now rinse and repeat, as we need to export the server certificate once more, this time in CER file format, i.e. without the private key and Base-64 encoded X.509 (.CER):

license_server_10

license_server_11

3. Extract Private Key and Server Certificate from PFX

In order to achieve this I utilized GnuWin32’s OpenSSL binaries for Windows. You can download them here. I downloaded and installed the Complete package, except sources. After installation you’ll find the required openssl.exe executable in C:\Program Files (x86)\GnuWin32\bin. To extract the certificate’s private key from the PFX, issue the following commands in an elevated command prompt:

openssl pkcs12 -in <ExportedCertificate>.pfx -nocerts -out key.pem -nodes
openssl rsa -in key.pem -out server.key

After issuing these commands you should have at least two files available:

  • key.pem
  • server.key
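Optionally, instead of re-using the Base-64 .cer export from step 2, you should also be able to pull the server certificate itself straight out of the PFX with OpenSSL (file names are just examples):

openssl pkcs12 -in <ExportedCertificate>.pfx -clcerts -nokeys -out server.crt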

4. Import newly created server certificate to Apache Tomcat

The previously extracted server.key file, together with the new server certificate (server.crt), can now be used to replace the expired Apache server certificate and key located at C:\Program Files (x86)\Citrix\Licensing\LS\conf … or … C:\Program Files (x86)\Citrix\Licensing\WebServicesForLicensing\Apache\conf …

Subsequently you have to verify or adjust the configuration file httpd-ssl.conf, which can be found in C:\Program Files (x86)\Citrix\Licensing\WebServicesForLicensing\Apache\conf\extra:

SSLCertificateFile "C:/program files (x86)/citrix/Licensing/WebServicesForLicensing/Apache/conf/server.crt"
SSLCertificateKeyFile "C:/program files (x86)/citrix/Licensing/WebServicesForLicensing/Apache/conf/server.key"

As a final step restart the corresponding Citrix Licensing services.
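If you prefer PowerShell, something like the following should catch both licensing services; the display-name filter is an assumption, so verify the exact service names on your server first:

Get-Service -DisplayName 'Citrix*Licensing*' | Restart-Service -Verbose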

Navigate to the URL https://<FQDNofCitrixLicenseServer>:8083 and verify whether any certificate errors appear:
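If you want to double-check which certificate Apache now presents on port 8083 without opening a browser, a small sketch like this should do (assumes PowerShell 5.1 or later; the FQDN is a placeholder):

$fqdn = '<FQDNofCitrixLicenseServer>'
$tcp  = [System.Net.Sockets.TcpClient]::new($fqdn, 8083)
$ssl  = [System.Net.Security.SslStream]::new($tcp.GetStream(), $false, { $true })   # accept any certificate so it can be inspected
$ssl.AuthenticateAsClient($fqdn)
$cert = [System.Security.Cryptography.X509Certificates.X509Certificate2]::new($ssl.RemoteCertificate)
"Subject: {0} / Expires: {1}" -f $cert.Subject, $cert.NotAfter
$ssl.Dispose(); $tcp.Close()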

5. Script

The final script looks as follows:

<#
.SYNOPSIS
Reports on Citrix licenses in use for selected products.
 
.DESCRIPTION
This script will query your Citrix License server and output the in-use
and total licenses for individual products.
 
.NOTES
Requires Citrix Licensing Snapin (Citrix.Licensing.Admin.V1)
If your License server has a self-signed cert you may get license
errors when running this.  I've resolved this in my test environments
by installing the cert as a Trusted Root CA Cert.
 
Source: http://www.clintmcguire.com/scripts/get-citrixlicenses/
Author: Clint McGuire
Version 1.0
Copyright 2013

Code edited by Alexander Ollischer
Source: https://blog.ollischer.com
Version 1.1
Copyright 2017
 
.EXAMPLE
PS> .\Get-CitrixLicenses.ps1
Using 71 Citrix XenApp Enterprise of 132 available.
 
#>
############################################################################################
#DEFINE THESE VARIABLES FOR YOUR ENVIRONMENT
 
#Enter the URL for your License server, typically this uses HTTPS and port 8083
#E.G. "https://licensingservername:8083"
$ServerAddress = "https://<LicenseServerName>.domain.local:8083"
 
#Enter the license type you would like to output, this can be a comma separated list, include 
#each option in single quotes
#E.G. 'Citrix XenApp Enterprise','Citrix XenDesktop Enterprise','Citrix EdgeSight for XenApp'
$LicenseTypeOutput = 'Citrix XenApp Enterprise','Citrix XenDesktop Enterprise','Citrix EdgeSight for XenApp','Citrix XenApp Platinum','Citrix XenDesktop Platinum'

############################################################################################
 
#Path to HTMLReport File
$FilePath = "C:\Scripts\LMC\LicAll.html"
$smtpsettings = @{
	To =  "recipient@domain.local"
	From = "sender@domain.local"
    Subject = "Citrix License Usage"
	SmtpServer = "IPorFQDNofYourMailServer"
	}
     
#Check for Licensing Snap-in, add if not currently added
#PowerShell Snap-in is contained in: LicensingAdmin_PowerShellSnapIn_x64.msi
if ( (Get-PSSnapin -Name Citrix.Licensing.Admin.V1 -ErrorAction SilentlyContinue) -eq $null )
{
    Add-PsSnapin Citrix.Licensing.Admin.V1
}
 
#Create Hash tables
$LicAll = @{}
$LicInUse = @{}
$LicAvailable = @{}
 
#Build License Display hash table 
$certhash = (Get-LicCertificate -adminaddress $ServerAddress).certhash
$LicAll = Get-LicInventory -AdminAddress $ServerAddress -CertHash $certhash
Get-LicInventory -AdminAddress $ServerAddress -CertHash $certhash | ConvertTo-HTML | Out-File $FilePath

ForEach ($LicInfo in $LicAll) 
{
    $Prod = $LicInfo.LocalizedLicenseProductName
    $InUse = $LicInfo.LicensesInUse
    $Avail = $LicInfo.LicensesAvailable
    if ($LicInUse.ContainsKey($Prod))
        {
                if ($LicInUse.Get_Item($Prod) -le $InUse) 
                {
                    $LicInUse.Set_Item($Prod, $InUse)
                }
        }
    else
        {
            $LicInUse.add($Prod, $InUse)
        }
    if ($LicAvailable.ContainsKey($Prod))
        {
                if ($LicAvailable.Get_Item($Prod) -le $Avail) 
                {
                    $LicAvailable.Set_Item($Prod, $Avail)
                }
        }
    else
        {
            $LicAvailable.add($Prod, $Avail)
        }
}
 
#Output license usage for each requested type.
Foreach ($Type in $LicenseTypeOutput)
{
    $OutPutLicInUse = $LicInUse.Get_Item($Type)
    $OutPutAvail = $LicAvailable.Get_Item($Type)
    Write-Host "Using" $OutPutLicInUse  $Type  "of" $OutPutAvail "available."
}

#Send Email Notification
$Output = (Get-Content $FilePath | Out-String)
Send-MailMessage @smtpsettings -Body $Output -BodyAsHtml -Encoding ([System.Text.Encoding]::UTF8)

 

Further reading:

The post Citrix License Server v12.x – How to renew an expired Apache Tomcat Server Certificate appeared first on blog - Alexander Ollischer | Citrix | Microsoft.

Windows Server 2016 ADFS v4.0 – The specified service account ‘CN=svc-ADFS-gMSA’ did not exist. Attempt to create the group Managed Service Account failed. Error: There is no such object on the server.


While running the Active Directory Federation Services Configuration Wizard for the first time on a newly installed Windows Server 2016, I ran into the following error after deciding to create the first federation server in a federation server farm, and creating a Group Managed Service Account (gMSA) as Service Account for my ADFS implementation:

The specified service account ‘CN=svc-ADFS-gMSA’ did not exist. Attempt to create the group Managed Service Account failed. Error: There is no such object on the server.

During troubleshooting it turned out that the underlying issue actually lay far deeper than expected…

At first all prerequisites checks passed successfully:

Prior to running the Configuration Wizard I successfully created the Key Distribution Services (KDS) Root Key for the gMSA by executing:

Add-KdsRootKey -EffectiveTime (Get-Date).AddHours(-10)

What did I miss? Well, silly me, I simply did not provision the gMSA prior to running the Configuration Wizard, even though the Configuration Wizard states that it will create a gMSA if it does not already exist:

So, how do I successfully create a corresponding gMSA in order to make it work with AD FS?

You can provision a gMSA using the New-ADServiceAccount cmdlet, where domain.com is your own domain:

New-ADServiceAccount -Name svc-ADFS-gMSA -DNSHostName sts.domain.com -KerberosEncryptionType RC4, AES128, AES256 -PrincipalsAllowedToRetrieveManagedPassword serv-adfs$ -ServicePrincipalNames http/sts.domain.com/domain.com, http/sts.domain.com/domain, http/sts/domain.com, http/sts/domain -Path "CN=Users,DC=DOMAIN,DC=local"

Verify whether the gMSA has been successfully provisioned in the corresponding OU provided with the -Path parameter:
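You can check this in Active Directory Users and Computers, or read the relevant properties back with the ActiveDirectory module; a minimal sketch, names as in the example above:

Import-Module ActiveDirectory
Get-ADServiceAccount -Identity svc-ADFS-gMSA -Properties DNSHostName, ServicePrincipalNames, PrincipalsAllowedToRetrieveManagedPassword |
    Select-Object Name, DNSHostName, ServicePrincipalNames, PrincipalsAllowedToRetrieveManagedPassword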

Verify whether all required Service Principal Names (SPN) have been registered and associated with the newly provisioned gMSA by executing the following command:

setspn /l svc-ADFS-gMSA

After running the Configuration Wizard again, it still failed with an error stating:

The system cannot find the file specified

Doing a little research turned up hints like "the GMSA was moved from the Managed Service Accounts container in Active Directory" and "make sure you have a Managed service account group object in ADUC. (not an OU)". Checking my Active Directory, I realized that the required container Managed Service Accounts was actually missing! I was unable to identify why it was missing in the first place, and looking into my Deleted Objects container came up … empty.

So, how do I successfully re-create a corresponding Managed Service Accounts container in order to make it work with AD FS?

Further research revealed that I had to re-create a corresponding container named "Managed Service Accounts" with ADSI Edit and ensure its security properties are correctly set to "Enable Inheritance".

Then I deleted my previously provisioned gMSA:

After that I ran the AD FS Configuration Wizard again and it was completed successfully:

Checking the AD FS Service as well as the corresponding Event Log showed that all went well:

Further reading:

The post Windows Server 2016 ADFS v4.0 – The specified service account ‘CN=svc-ADFS-gMSA’ did not exist. Attempt to create the group Managed Service Account failed. Error: There is no such object on the server. appeared first on blog - Alexander Ollischer | Citrix | Microsoft.

Windows Server 2016 ADFS v4.0 – Certain (non-admin) Users Cannot Login – no error, just plain login mask


Just recently I happened to face a pretty annoying issue that took me a couple of hours to solve. In the end it was actually quite simple and obvious.

Long story short – a newly implemented AD FS 4.0 farm, properly configured and already working. No funny claim rules, no additional authentication factors enforced, yet login worked only for a couple of accounts, with no obvious similarities or discrepancies between users. All users were created equal.

Testing was done by navigating to https://sts.domain.com/adfs/ls/idpinitiatedSignon.aspx and logging in with valid user credentials. For some users it worked and for others… well it did not. They were just thrown back to the login mask, no error message or anything.

Please note that the aforementioned endpoint is by default disabled in Windows Server 2016 ADFS 4.0. You have to enable it first by running:

Set-AdfsProperties -EnableIdPInitiatedSignonPage $true

Troubleshooting was quite a chore as no events were triggered or logged in any of the available Event Logs or log/tracing files. When I used bogus login credentials on purpose, e.g. a wrong username or password, I received the corresponding error messages about an invalid username or password.

I use a managed service account (gMSA) for the ADFS service, as this is best practice and recommended by Microsoft.

As stated by Vasil Michev in his blog post (and answer to my problem):

Turns out, the service account was missing the required permissions. Simply giving the account Read access to the user account in question resolved the issue – the user was now able to properly use AD FS. As it turns out, the root cause was that the for whatever reason, the access entry for “Authenticated Users” was removed from the “Pre-Windows 2000 Compatible Access” group in the affected environments.

and

In addition, a very similar issue might occur if the application in question is not able to read the tokenGroupsGlobalAndUniversal attribute. To solve this, make sure that the service account is a member of the “Windows Authorization Access Group”.

Furthermore as stated by Aaron.H in the corresponding Microsoft Forums thread:

I discovered that the Pre-Windows 2000 Compatible Access group in my production domain did not have Authenticated Users as a member like the new lab domain did. Once I added Authenticated Users to the Pre-Windows 2000 group, I was able to authenticate using regular domain accounts to ADFS.

Another reply states:

I just added the service account (which was created manually rather than a managed service account) for adfs to the Pre-Windows 2000 group and that resolved my issue, so you don’t have to add the Authenticated Users group if you don’t want to.

And a final statement on this issue:

Don’t know if you guys solved it by now but i wanted to let you know the Pre-windows 2000 Group membership didn’t work for me. However when i added the ADFS group managed service account to the ‘ Windows Authorization Access Group’  it worked instantly. According to the description of this group (‘Members of this group have access to the computed tokenGroupsGlobalAndUniversal attribute on User objects’) this is expected behaviour.

There are a lot of reasons why the correct permissions may not be assigned throughout the domain: the adminCount attribute, disabled inheritance, and so on. So thanks to everyone involved in this issue for clarifying things!
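If you want to check the two groups mentioned above in your own environment, a small sketch like this lists their members (it assumes the ActiveDirectory module; the gMSA name in the commented line is only an example):

Import-Module ActiveDirectory
foreach ($group in 'Pre-Windows 2000 Compatible Access','Windows Authorization Access Group') {
    "== $group =="
    Get-ADGroup -Identity $group -Properties member | Select-Object -ExpandProperty member
}
# e.g. to add an ADFS gMSA to the Windows Authorization Access Group:
# Add-ADGroupMember -Identity 'Windows Authorization Access Group' -Members (Get-ADServiceAccount svc-ADFS-gMSA)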

Further reading

The post Windows Server 2016 ADFS v4.0 – Certain (non-admin) Users Cannot Login – no error, just plain login mask appeared first on blog - Alexander Ollischer | Citrix | Microsoft.

Citrix XenDesktop 7.x – Printer auto-creation failure. Reason: AddPrinter() failed with status 0x7A


Today I came across an issue where printer auto creation failed for a couple of HP printers on one client computer only. The XenDesktop worker showed an Event ID 1106 stating Printer auto-creation failure. Reason: AddPrinter() failed with status 0x7A.

Searching for this particular error didn’t reveal anything helpful. So I had to start digging into it by myself.

Converting hex 0x7A to decimal 122 and running net helpmsg 122 provides some additional information:

The data area passed to a system call is too small.
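For reference, the same lookup can be scripted in two lines of PowerShell:

$code = [Convert]::ToInt32('7A', 16)   # 0x7A = 122
net helpmsg $code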

Searching for this error message revealed that there was an issue with the printer or its associated driver on the client. It looks like the printer drivers were not installed properly on the user's PC. Re-installing the driver on the workstation solved the issue.

The post Citrix XenDesktop 7.x – Printer auto-creation failure. Reason: AddPrinter() failed with status 0x7A appeared first on blog - Alexander Ollischer | Citrix | Microsoft.


Microsoft Exchange 2016 – 454 4.7.5 The certificate specified in TlsCertificateName of the SendConnector could not be found


If the emails remain on the Exchange server and cannot be forwarded to the smarthost for sending, it may be because the certificate bound to the corresponding connector no longer exists or has expired. It is, of course, also possible that the expected subject alternative name (SAN) is missing or incorrect. In that case you may receive an error stating:

454 4.7.5 The certificate specified in TlsCertificateName of the SendConnector could not be found

You can verify whether you have such an issue by checking the mail queue:

Get-Queue

In case you have a lot of mails stuck in one of your mail queues you can further investigate the affected queue by running:

Get-Queue <queue name>

e.g. Get-Queue "SERV-MAIL\3" | fl

Having a look at the LastError property reveals the aforementioned error.

In my case the outbound Office 365 Send Connector was involved. In order to fix this I had to issue the following commands:

$TLSCert = Get-ExchangeCertificate -Thumbprint <thumbprint of valid certificate>
$TLSCertName = "<I>$($TLSCert.Issuer)<S>$($TLSCert.Subject)"
Get-SendConnector -Identity "<send connector name>" | Set-SendConnector -TlsCertificateName $TLSCertName
Restart-Service MSExchangeTransport

You have to replace the thumbprint accordingly, i.e. matching your own certificate’s thumbprint.

Run Get-ExchangeCertificate cmdlet

The procedure would be the same for all other Send Connectors or Receive Connectors.

By the time you go back to Queue viewer the queues should have started to empty.
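If you don't want to wait for the next automatic retry, you can trigger one yourself; the queue identity below simply re-uses the example from above:

Retry-Queue -Identity "SERV-MAIL\3"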

Further reading:

https://docs.microsoft.com/en-us/exchange/troubleshoot/email-delivery/mails-issues-occur-after-april-15-2016

https://www.petenetlive.com/KB/Article/0001631

The post Microsoft Exchange 2016 – 454 4.7.5 The certificate specified in TlsCertificateName of the SendConnector could not be found appeared first on blog - Alexander Ollischer | Citrix | Microsoft.

Microsoft ADFS 3.0 – Event ID 364 – No strong authentication method found for the request from


After upgrading the MFA component on our ADFS server, it stopped working. Further investigation showed the following error event:

Encountered error during federation passive request.

Additional Data

Protocol Name:
Saml

Relying Party:
https://adfs.domain.com/saml/info

Exception details:
Microsoft.IdentityServer.Web.NoValidStrongAuthenticationMethodException: No strong authentication method found for the request from https://adfs.domain.com/saml/info.
at Microsoft.IdentityServer.Web.Authentication.AuthenticationPolicyEvaluator.EvaluatePolicy(Boolean& isLastStage, AuthenticationStage& currentStage, Boolean& strongAuthRequried)
at Microsoft.IdentityServer.Web.PassiveProtocolListener.GetAuthMethodsFromAuthPolicyRules(PassiveProtocolHandler protocolHandler, ProtocolContext protocolContext)
at Microsoft.IdentityServer.Web.PassiveProtocolListener.GetAuthenticationMethods(PassiveProtocolHandler protocolHandler, ProtocolContext protocolContext)
at Microsoft.IdentityServer.Web.PassiveProtocolListener.OnGetContext(WrappedHttpListenerContext context)

Event ID 364, Source: AD FS, Log Name: AD FS\Admin

The upgrade inadvertently disabled the Multi-factor Authentication Method in ADFS:

In order to make it work again I had to enable the aforementioned MFA component in ADFS Management | Authentication Methods | Multi-factor Authentication Methods, even though it may not be actively used:

After that everything went back to normal.
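The same check and fix can also be done from PowerShell; the provider name in the second line is only an example, take the actual value from the output of the first cmdlet:

Get-AdfsGlobalAuthenticationPolicy | Select-Object AdditionalAuthenticationProvider
Set-AdfsGlobalAuthenticationPolicy -AdditionalAuthenticationProvider 'AzureMfaAuthentication'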

Further reading:

https://social.msdn.microsoft.com/Forums/sqlserver/en-US/bce14a02-cc4f-42ee-a5e0-94559b2ca5c8/issues-with-azure-mfa-and-adfs?forum=windowsazureactiveauthentication

The post Microsoft ADFS 3.0 – Event ID 364 – No strong authentication method found for the request from <Relying Party> appeared first on blog - Alexander Ollischer | Citrix | Microsoft.

Microsoft Teams (Desktop App) Error Code CAAD0009


Recently my Microsoft Teams Desktop app started showing the error code caad0009 and I was unable to use the Teams app from then on. This is a rare sign-in error which indicates the service could not validate your credentials or recognize your device. Although it occurs quite rarely, this error primarily affects work and school accounts, and it only occurs in the Windows Desktop app, i.e. you can still use the Teams browser version, the Teams mobile app, or even the Teams Desktop app on a totally different Windows device. I searched for quite some time before finally resolving this issue with a simple PowerShell command. None of the other suggested solutions worked for me.

The exact error message states:

Error code – caad0009
There’s a more permanent way to sign in to Microsoft Teams. If you’re having trouble completing the process, talk to your IT admin

Other solutions suggested things like

  • clearing the Windows Credentials cache
  • clearing the Teams Desktop app cache
  • re-installing Teams Desktop app
  • running Teams Desktop app in compatibility mode (e.g. Windows 7)

Someone even suggested that “it was a corrupt MicroSD card I had inserted couple of days a go, which prevented MS Teams and OneDrive from starting or signing in.” Hilarious, really!

It all came down to WAM (Web Account Manager) and ADAL (Azure Active Directory Authentication Library). After running this little PowerShell command in an elevated Windows PowerShell session, the Teams Desktop app started working again like a charm:

if (-not (Get-AppxPackage Microsoft.AAD.BrokerPlugin)) { Add-AppxPackage -Register "$env:windir\SystemApps\Microsoft.AAD.BrokerPlugin_cw5n1h2txyewy\Appxmanifest.xml" -DisableDevelopmentMode -ForceApplicationShutdown } Get-AppxPackage Microsoft.AAD.BrokerPlugin

In my case the command took almost an hour to complete; it simply sat there with a blinking cursor and nothing seemingly happening. I just let it run, and all of a sudden it completed successfully. After that, the Teams Desktop app started working immediately, without my having to reboot the PC or anything.

Further reading:

The post Microsoft Teams (Desktop App) Error Code CAAD0009 appeared first on blog - Alexander Ollischer | Citrix | Microsoft.

Resolving Kerberos Authentication Issues with Entra ID Application Proxy and IIS Applications


Introduction

Integrating on-premises applications with cloud services can sometimes lead to unexpected authentication challenges. In this blog post, I'll explore a specific issue encountered when setting up Microsoft Entra ID Application Proxy (formerly Azure AD Application Proxy) to provide Single Sign-On (SSO) access to an internal IIS application using Kerberos Constrained Delegation (KCD). I'll delve into the problem, the troubleshooting steps taken, and the ultimate solution that resolved the authentication errors.

Scenario Overview

  • Application Proxy Connector Server: serv-app101
  • Backend IIS Application Server: serv-iis
  • Published Application URL: https://portal.domain.com
  • Authentication Method: Kerberos Constrained Delegation (KCD)
  • Users: Entra ID-synced Active Directory users

The goal was to allow Entra ID users to access the internal IIS application seamlessly via SSO, using the Application Proxy with KCD.

The Problem

After configuring the Application Proxy and publishing the internal application, users encountered the following issues:

  • Event ID 13019 in the Application Proxy Connector's event log:

  • Event ID 4769 in Security Event Log on corresponding Domain Controller:

  • User Access Error: "The user is not authorized to access the backend application."

Notably, users had been assigned to the Enterprise Application in Entra ID, and prior logins without Kerberos delegation worked seamlessly. The user credentials were confirmed to be correct, as users could successfully pre-authenticate to Entra ID.

Troubleshooting Steps

1. Verifying SPN Registration and Delegation Settings

I first ensured that the Service Principal Name (SPN) was correctly registered and that delegation settings were properly configured:

  • SPN Registration:
    • Verified that HTTP/portal.domain.com was registered on the serv-iis computer account, as IIS was running under the Network Service account.
    • Used the following command to confirm the SPN: setspn -L serv-iis
  • Delegation Settings:
    • Checked that serv-app101 was configured to trust for delegation to the HTTP/portal.domain.com SPN.

2. Checking for Duplicate SPNs

  • Ran the command to identify any duplicate SPNs: setspn -X
  • Confirmed that there were no duplicates causing conflicts.

3. Verifying IIS Authentication Settings

  • Ensured that Windows Authentication was enabled on the IIS application.
  • Confirmed that Negotiate was listed above NTLM in the Providers.

4. Testing Internal Access

  • Attempted to access https://portal.domain.com directly from within the internal network.
  • Verified that users could authenticate successfully without going through the Application Proxy.

5. Checking Network Connectivity and Time Synchronization

  • Confirmed that serv-app101 had network connectivity to serv-iis and the domain controllers on all ports required.
  • Ensured that all servers had synchronized clocks.

6. Reviewing Event Logs for Additional Clues

  • Enabled Kerberos logging to get detailed error messages.
  • Reviewed the Application and Security event logs on both serv-app101 and serv-iis.

Despite these efforts, the issue persisted. But I did find a couple of events that pointed me in the right direction, especially Event ID 4769 in the Domain Controller's Security log, which stated a Failure Code of 0x6. According to Microsoft this error code corresponds to KDC_ERR_C_PRINCIPAL_UNKNOWN, i.e. "Client not found in Kerberos database". In this scenario, KDC_ERR_C_PRINCIPAL_UNKNOWN is actually an "Access is Denied" error for reading the TokenGroupsGlobalAndUniversal user-constructed attribute. By default, authenticated users have this permission, whereas computer objects don't.

The Solution: Adding the Application Proxy Connector to the Windows Authorization Access Group

The breakthrough came with the realization that the Application Proxy Connector's computer account (serv-app101) lacked sufficient permissions to read certain user-constructed attributes required for Kerberos authentication.

Understanding the Root Cause

  • Kerberos Authentication Requirements:
    • Kerberos authentication requires the service requesting a ticket on behalf of a user to read the user's TokenGroupsGlobalAndUniversal attribute.
  • Permission Limitations:
    • By default, authenticated user accounts have permission to read this attribute.
    • Computer accounts, such as serv-app101, do not have this permission by default.

Granting Necessary Permissions

To resolve the issue, I added the serv-app101 computer account to the Windows Authorization Access Group, a built-in Active Directory group that grants permission to read the TokenGroupsGlobalAndUniversal user-constructed attribute.
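The equivalent change from PowerShell looks like this (computer name as used throughout this post); note that the connector server may need a reboot, or at least a restart of the connector service, before the new group membership shows up in its Kerberos token:

Add-ADGroupMember -Identity 'Windows Authorization Access Group' -Members (Get-ADComputer -Identity serv-app101)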

Result

After adding serv-app101 to the Windows Authorization Access Group and allowing the changes to propagate, users were able to authenticate successfully via the Application Proxy with KCD enabled. The Event ID 13019 error ceased to appear in the event logs, confirming that the Application Proxy Connector could now retrieve Kerberos tickets on behalf of users.

Conclusion

When configuring Entra ID Application Proxy with Kerberos Constrained Delegation, it's crucial to ensure that the Application Proxy Connector's computer account has the necessary permissions to read user attributes required for Kerberos authentication.

Key Takeaways:

  • SPN Registration and Delegation Settings:
    • Always verify that SPNs are correctly registered and delegation settings are properly configured.
  • Permissions for Computer Accounts:
    • Be aware that computer accounts do not have the same default permissions as user accounts.
    • Grant necessary permissions (according to the least privilege principle) by adding the computer account to appropriate groups, such as the Windows Authorization Access Group.
  • Troubleshooting Approach:
    • Systematically check all aspects of the configuration, including network connectivity, authentication settings, and permissions.
    • Utilize event logs and enable verbose logging to uncover hidden issues.

By understanding the interplay between Active Directory permissions and Kerberos authentication requirements, administrators can effectively troubleshoot and resolve authentication issues in hybrid environments. Hope this helps.

Additional Resources


The post Resolving Kerberos Authentication Issues with Entra ID Application Proxy and IIS Applications appeared first on blog - Alexander Ollischer | Citrix | Microsoft.

Viewing all 59 articles
Browse latest View live