David Klein's Corner

How can I easily have different app.config files for different build configurations in MSBUILD?

There are a few proposed solutions I have seen suggested, some involving 3rd party tools and Visual Studio transformation extensions. The following is the simplest method I have found. It does involve a violation of the DRY principle (Don't Repeat Yourself), as the app.config files are duplicated (due to the lack of transforms) - but I consider this a small sacrifice given the simplicity of the solution.

Steps:
1) In Visual Studio, right-click your project file (.csproj), select "Unload Project", then right-click it again and select "Edit {My Project Name}.csproj" to edit it.
2) Add a folder for each MSBUILD configuration that you want and put your different App.config files in those subfolders e.g. \Release\App.config and \Debug\App.Config
3) Find the <None Include="App.config" /> element in your csproj project file and replace it with the following conditional Include statements, one for each of your MSBUILD configurations:
<None Condition="'$(Configuration)' == 'Debug'" Include="Debug\App.config" />
<None Condition="'$(Configuration)' == 'Release'" Include="Release\App.config" />

When you build, MSBuild will pick up whichever App.config matches the selected configuration and place it in that configuration's output directory.
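To switch between them, just build with the configuration you want. A quick sketch (this assumes msbuild.exe is on your path, e.g. from a Developer Command Prompt, and uses a hypothetical project name):

# Build with the Debug configuration - picks up Debug\App.config
msbuild .\MyProject.csproj /p:Configuration=Debug

# Build with the Release configuration - picks up Release\App.config
msbuild .\MyProject.csproj /p:Configuration=Release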

DDK


Microsoft Team Foundation Server 2013 Update - Error When Installing - "SQL Server Reporting Services is configured to require a secure connection."

You may get the following error when upgrading to Team Foundation Server 2013 (TFS 2013) or installing its update packages, if you are using self-signed certificates or your SQL Report server runs unsecured (not using SSL):

TF255455: SQL Server Reporting Services is configured to require a secure connection. However, no HTTPS URL is configured with a valid certificate. Use the Reporting Services Configuration Manager to configure or remove HTTPS support. For more information see http://go.microsoft.com/fwlink/?LinkId=179982


To resolve this issue (without getting valid certificates or securing your site), you can go to your Microsoft SQL Server Reporting Services (SSRS) folder and find the rsreportserver.config file. This is typically in:

C:\Program Files\Microsoft SQL Server\MSRSXX.<ServerInstance>\Reporting Services\ReportServer\rsreportserver.config             

Update the "SecureConnectionLevel" value from 2 to 0 and you will then be able to run the update. Change it back once the update has completed.
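If you prefer to script the change, here is a rough sketch (the instance folder below is just an example - adjust it to your SSRS instance, run PowerShell as administrator and back up the file first):

# Hypothetical path - substitute your actual SSRS instance folder
$configPath = "C:\Program Files\Microsoft SQL Server\MSRS11.MSSQLSERVER\Reporting Services\ReportServer\rsreportserver.config"
[xml]$config = Get-Content $configPath
# Locate the SecureConnectionLevel setting wherever it sits in the file and set it to 0
$node = $config.SelectSingleNode("//Add[@Key='SecureConnectionLevel']")
$node.SetAttribute("Value", "0")
$config.Save($configPath)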



DDK

Impersonation of Web Users in ASP.NET/SharePoint 2013 without a password

There seemed to be a lack of samples available to demonstrate how Windows impersonation can be done within the context of a web application (such as SharePoint 2013 or ASP.NET). Most of the examples use the "LogonUser" Windows API call to get a user token. e.g. https://msdn.microsoft.com/en-us/library/chf6fbt4.aspx. However - that call requires a password to work. You don't really want all your user passwords to have to sit in a secure store to enable impersonation!

In my scenario, I had to write to a file through an existing COM Component via a .NET COM Interop library. It depended on the write operation being done from the context of a valid user - otherwise the file wouldn't be stamped correctly with author metadata.

To do this, I had to use an overload of the WindowsIdentity constructor which accepts a UPN (User Principal Name). From there, you can impersonate users within your code at will.

NOTE: the account that is doing the impersonation (e.g. svcSP) will need to have the "Act as Part of the Operating System" right (defined in your Local Security Policy under User Rights Assignment) for this to work.

Code Sample:



using System.Diagnostics;
using System.DirectoryServices.AccountManagement;
using System.Security.Principal;

void Main()
{
    var userName = "LOCALDEV\\david.klein";

    // Resolve the user's UPN (User Principal Name) from the domain - no password required.
    using (var ctx = new PrincipalContext(ContextType.Domain))
    {
        var user = UserPrincipal.FindByIdentity(ctx, userName);

        if (user != null)
        {
            var upn = user.UserPrincipalName;
            Debug.Print(upn);

            // This WindowsIdentity overload accepts just the UPN.
            WindowsIdentity id = new WindowsIdentity(upn);
            WindowsImpersonationContext wic = id.Impersonate();
            try
            {
                // Do what you need here under the impersonation context.
                var currentId = WindowsIdentity.GetCurrent().Name;
                Debug.Print(currentId);
            }
            finally
            {
                // Always revert to the original identity.
                wic.Undo();
            }
        }
    }
}

How to identify the application pool for a worker process (Windows Server 2012 and 2012 R2 using IIS 8)

Use the following command to determine which application pool maps to your w3wp.exe worker instance. This is particularly handy with SharePoint as it has 3 w3wp.exe processes running at any one time by default.
cd %windir%\system32\inetsrv
appcmd list wp

This will give you an output similar to the following:

C:\Windows\System32\inetsrv>appcmd list wp
WP "21192" (applicationPool:SharePoint - 80)
WP "12256" (applicationPool:SecurityTokenServiceApplicationPool)
WP "19972" (applicationPool:5f2e9be121504641ae144bcae3b8cf6e)
DDK

Fix - Error in lc.exe when Compiling Solution upgraded to Visual Studio 2017 RTM from Visual Studio 2015

Upgraded our product solution today to the latest Visual Studio 2017 RTM and everything seemed to work fine - until I started getting the following error in the build:

"The specified task executable "lc.exe" could not be run. The filename or extension is too long"

What is this lc.exe command and why is it running? lc.exe is the .NET License Compiler - it processes the licenses.licx file that Visual Studio maintains with information about all the licensed components used in a project.

In my case, the error was occurring in "C:\Program Files (x86)\Microsoft Visual Studio\2017\Enterprise\MSBuild\15.0\Bin\Microsoft.Common.CurrentVersion.targets" Line 2975 according to my error log.

Clearly this problem was related to the fact we are using Telerik Controls which require a licx file to compile (or so I thought).

I turned on full diagnostics in Visual Studio 2017 to help get to the bottom of the issue:




This showed that the full command line being passed to lc.exe was over 42,000 characters long (!):
1> C:\Program Files (x86)\Microsoft SDKs\Windows\v10.0A\bin\NETFX 4.6.1 Tools\lc.exe /target:ESSP.ApplicationPages.dll /complist:Properties\licenses.licx /outdir:obj\Debug_SP2013\ ....[SNIP]
 


Several places, such as Microsoft Connect, suggested that the "solution" (pardon the pun) is simply to delete the licx file (also as per http://docs.telerik.com/devtools/aspnet-ajax/licensing/license-file). Once I did that, I could recompile without any build exceptions.

This issue comes about because lc.exe can only handle a command line of around 32,000 characters or less - and the full path is passed for every reference. Needless to say, this is a restrictive limitation in the licensing mechanism!

So there are a few possible alternatives to fix this issue:
1) Remove the licx file if possible when you don't need the full licensed functionality (in my case this was fine, as we don't need design mode for the Telerik controls).
2) Reduce the length of your reference paths by mapping a shared drive or logical redirect to a shorter path (e.g. c:\references instead of c:\src\DDK\product name\releases\ etc.) - see the sketch below.
3) Reduce the number of references in the project that has issues with lc.exe.
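For option 2, one simple way to shorten the reference paths (a sketch only - the drive letter and folder below are hypothetical) is the built-in subst command, which maps a drive letter to a long folder path:

# Map R: to the folder holding the referenced assemblies, then point the project references at R:\
subst R: "C:\src\DDK\Product Name\References"
# Remove the mapping again when you are done
subst R: /D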

DDK

BOSE QuietComfort 35 - How to Tell You Have a Fake set of BOSE Headphones

I've received fake MicroSD cards before when buying online - but the attention to detail in the fake BOSE QC35 headphones I recently received was amazing.

After ordering my Bose QuietComfort 35 noise-cancelling headphones on eBay, I was surprised at how slick they looked - but the positive impression did not last. Once I plugged them in and charged them up, it became painfully apparent that the electronics inside them were not up to scratch:

1) They would cut out intermittently from the Bluetooth connection.
2) The noise cancellation was ineffective. I've used the QC25 headphones before, and the difference when the genuine noise cancellation is turned on and off is night and day. When it is on, it reminds me of the scenes in movies like "Saving Private Ryan" or "Dunkirk" where someone is deafened by the concussive effects of a grenade or bomb (yes, the real ones are that good!).

I contacted Bose (Report_Counterfeits@bose.com) and they confirmed that they were in fact very well done fakes that didn't exhibit most of the superficial faults that fakes usually have. In particular, the "BOSE" writing on the headphones was embossed perfectly and there were no obvious marks where the ear cups had been glued together. All the serial numbers on the headphones and on the box also matched. The box, plastic and packaging were also hard to fault.

The only issue was that the serial number was invalid - the number after the Z is meant to be a date.

S/N:072536Z08231568AE

So it seems that the most important (and most expensive) components - the internal chips and electronics - are the part you are probably paying for, and the part that is hardest for the people making the yum-cha/knock-off copies to reproduce accurately.

So the simplest way to check for a fake is to attempt to register your headphones online at:
https://www.bose.com.au/en_au/support/product_registration.html


The guy I was dealing with on eBay even tried to negotiate the refund down to 30% of the original price before I told him how it is meant to work. Make sure you demand a 100% refund, including the return postage cost, on a fake substandard item like this. The guys at BOSE will also help you to get a refund if need be.

Hope this helps!
DDK

TypeScript - Importing jQuery TypeScript Definitions (d.ts) for Visual Studio 2017

TypeScript is growing in popularity and jQuery remains one of the most popular JavaScript frameworks. Consequently, you will typically want to use them together in the same project at one stage or another.
If you do want to use jQuery within your TypeScript files in Visual Studio 2017, you need the "DefinitelyTyped" type definitions for jQuery. These allow Visual Studio to correctly recognise jQuery objects when validating and compiling (or transpiling, depending on your preferred terminology).


To do this, just download the TypeScript definitions (d.ts) for jQuery via the NuGet Package Manager UI, or run the following command in the NuGet Package Manager Console:

Install-Package jquery.TypeScript.DefinitelyTyped

Now the TypeScript compiler will recognise your jQuery calls.


DDK

Forcing Synchronization of Display Name and Email from Active Directory without User Profile Synchronization - PowerShell Script

I put this script together to synchronise the Display Name and Email for all users in a root web when they have been updated in Active Directory but aren't flowing through to your display name in SharePoint. This may be required if the User Profile Service is not set up or is failing. This problem is discussed in more detail at https://gallery.technet.microsoft.com/office/User-Information-List-in-8b420e8c

Add-PSSnapin "Microsoft.SharePoint.PowerShell"
#As discussed in https://gallery.technet.microsoft.com/office/User-Information-List-in-8b420e8c

Write-Host -ForegroundColor Yellow "-------------------Process Start---------------------------------------------------------------"
Write-Host -ForegroundColor Yellow "This script will sync the AD display name and email from AD without running a user profile sync"
Write-Host -ForegroundColor Yellow "As discussed in https://gallery.technet.microsoft.com/office/User-Information-List-in-8b420e8c"
Write-Host -ForegroundColor Yellow "-------------------Process Start---------------------------------------------------------------"

$stopWatch = [Diagnostics.Stopwatch]::StartNew()

$rootWeb = Get-SPWeb "https://demo01.berkeleyit.com/"
ForEach ($user in $rootWeb.AllUsers)
{
    Write-Host -ForegroundColor Green "Syncing Email and DisplayName with Active Directory... for $user"
    Set-SPUser -Web $rootWeb -Identity $user.UserLogin -SyncFromAD
}

$stopWatch.Stop()

$timeTaken = $stopWatch.Elapsed

Write-Host -ForegroundColor Yellow "-------------------Process Completed in $timeTaken------------------"
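
If the stale details appear across multiple site collections, a variation on the same idea is to loop over the root web of every site collection in the farm. This is only a sketch (the Get-SPSite scope below is an assumption - test it somewhere safe first):

# Sync AD display name and email for the users of every site collection's root web
Get-SPSite -Limit All | ForEach-Object {
    $web = $_.RootWeb
    ForEach ($user in $web.AllUsers)
    {
        Set-SPUser -Web $web -Identity $user.UserLogin -SyncFromAD
    }
    $web.Dispose()
    $_.Dispose()
}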

Deleting Azure Active Directory Tenant – Unable to delete all Enterprise Applications - Can't Delete Azure DevOps from within User Interface

Encountered an issue today with removal of an Azure AD Tenant that is no longer used. When attempting to delete the Azure AD Directory - I simply received warnings that I had to "Delete All Enterprise Applications" - with a warning status indicator.

When I tried to remove the single Azure Enterprise Application (Azure DevOps) - the Delete button was disabled. As you could imagine - this put me in a bit of a pickle!

The fix that worked for me is as follows:

1. Create a new Global Admin account in the Azure Active Directory you are trying to delete. Make sure you copy the temporary password as you'll need to log in with it.

2. To ensure you have the Azure AD PowerShell module, start PowerShell and run:
Install-Module -Name AzureAD

3. Once done, run Connect-AzureAD. You will be prompted to log in. Log in with the user you created; you will be asked to change the temporary password.


4. Run Get-AzureADServicePrincipal to retrieve the ObjectId of the Enterprise Application that you can't delete.

5. Run Remove-AzureADServicePrincipal -ObjectId [enter ObjectId here] to remove it directly (see the sketch after these steps).

6. Remove your temporary user.
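
Putting steps 3 to 5 together, a rough sketch (the display name filter below is an assumption - substitute whatever Get-AzureADServicePrincipal returns for your stuck Enterprise Application):

# Sign in with the temporary Global Admin account created above
Connect-AzureAD
# Find the service principal behind the Enterprise Application that won't delete
$sp = Get-AzureADServicePrincipal -All $true | Where-Object { $_.DisplayName -like "*Azure DevOps*" }
# Remove it directly
Remove-AzureADServicePrincipal -ObjectId $sp.ObjectId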

You should now be able to delete your Azure Active Directory (Azure AD) Tenant instance.

Source: https://blogs.msdn.microsoft.com/kennethteo/2017/09/19/deleting-azure-ad-tenant/

List of Azure Region Codes for Azure 2019 DevOps Migration Tool (and TFSMigrator Tool)

Whilst using the Azure DevOps 2019 migration tool to move from an on-premises Azure DevOps server to the cloud, you will be required to enter the desired destination region. Below is a list of all the valid entries as at June 2019:

CC = Central Canada
WEU = Western Europe
EA = East Asia
EAU = East Australia
CUS = Central US
MA = South India
SBR = South Brazil
WCUS = West Central US
UKS = UK South
EUS = East US
NCUS = North Central US
SCUS = South Central US
WUS2 = West US 2
GH = ?
EUS2 = East US 2

These values appear to come from the server and are not embedded in the tool - otherwise I'd be able to use Reflection to get more information! These region codes seem to be undocumented by Microsoft at present.

[Update - the documentation now has some more details, but doesn't cover all available region options - https://docs.microsoft.com/en-us/azure/devops/migrate/migration-import?view=azure-devops#supported-azure-regions-for-import]

Basic Guidelines for Product Offering Go/No-Go Decisions (Including Product Fit/Market Fit)

I've worked for software/IT consulting companies, product development companies and system/service integration companies over my career. Most recently I've noticed that some of the basic decision making around which products and service offerings should be developed has missed some critical gateways, resulting in full or partial product failure (i.e. the product doesn't make a good return on investment or ever turn a profit).

Often, component licensing costs are ignored or forgotten, or the resulting pricing is something that the market cannot bear. Sometimes this is due to a lack of multi-tenancy support, so the economics of the product offering are not scalable.

Licensing and subscription costs may go down over time (especially with AWS and Azure services becoming gradually cheaper as they reach greater economies of scale and proportional levels of competition). However, this may not happen quickly enough over the product's lifetime to deliver profitability. In that case, service offerings/products need to be "end-of-lifed" ("EOL'd") or migrated to new platforms and components with lower cost structures.

Going back to basics, I put this diagram together to outline some key principles of product offer development which should be considered as gateways when deciding to bring a product to market.

What makes a product worthwhile? It starts with being something customers want to buy (and buy enough of). If you find this sweet spot, then you have product/market fit - which means you're no longer pushing your product onto customers. You also need to have a clear vision - otherwise delivery will be problematic when building your product out. There are many articles on product and market fit available - these are just some of my ideas that resonate based on recent experience.

In particular, critical profitability constraints can be forgotten when that "cool new tech" comes out or "everyone else is doing this in the market":


What is clear is that product-market fit is an ideal - but not a sufficient indicator of whether a product should go to market. There are other factors to be considered including cost structures and viability of ongoing product development and marketing to maintain that fit, customer value and (hopefully) margins.  


NIST 800-207 - What is Zero Trust Architecture (ZTA) and Why Has It Become Important?

One of the primary concerns, when operating in cloud environments and accessing resources over the internet, is cybersecurity. Traditional firewalls and edge-approaches to security no longer align with how we use technology.

This has given rise to the recent release of the National Institute of Standards and Technology (NIST) 800-207 security draft https://csrc.nist.gov/publications/detail/sp/800-207/draft. The release of this document highlights the prominence that the Zero Trust approach to network security has gained. Zero Trust is a security model that has arisen out of necessity due to evolving user and mobility expectations and the rise of different software and infrastructure delivery models such as the cloud.

What is Zero Trust Security?

  • Zero Trust assumes that there is no implicit trust based on a user's or resource's location (e.g. intranet or internet). Normal perimeter or edge-based security approaches segment the network statically based on location, subnets and IP ranges.
  • Zero trust security focuses more on protecting the resources and users inside and outside those network boundaries. 
  • Zero Trust is a more granular and flexible approach to securing resources reflective of the reality of modern workplaces. 
  • Zero Trust typically uses the following parameters in combination to determine policy-based access to resources:
    • User Identity
    • Device
    • Location
    • Session Risk (such as anomalous/unusual access behaviors or times)


Why has it become important?

  • The rise of working from home, remote users, and Bring Your Own Devices (BYOD) and cloud-based services (e.g. Salesforce, Office 365, Microsoft Teams and other AWS, Azure and GCP-based applications) have led to resources and users being located outside traditional network boundaries. 
  • Consequently, authentication and authorization cannot be assumed to be valid just because of the source location of a request - credentials and associated tokens need to be validated independently of location. 
  • Zero Trust is also required because of greater awareness of the "Insider Threat" from contractors and employees - through negligence or malicious intent.

Why is it difficult?

  • Zero Trust requires a much better understanding of the assets and resources that need protection and the behavior of the users consuming and accessing those resources. 
  • Phenomena such as "Shadow IT" also introduce problems because they are not visible and so Zero Trust approaches may actually exclude previously functioning devices from resource access. 
  • Zero Trust requires the creation of more refined corporate and technical policies to handle the more granular resource-based approach to accessing your critical corporate systems.

APRA CPS 234 - Summary of Security Compliance Requirements


In my work with NTT, I've recently been dealing with several FSI (Financial Services Industry) organisations that have to comply with the Australian Prudential Regulation Authority (APRA) Standard CPS 234 (July 2019). Here's a brief overview of what compliance with CPS 234 entails:
  1. APRA CPS 234 is Cybersecurity 101 for Banks, Insurers and related institutions.
  2. As with standards like ISO 27001:2013, it takes a risk-based approach to ensuring that adequate CIA (Confidentiality, Integrity and Availability) is maintained for information assets.
  3. The Board is ultimately responsible for ensuring appropriately robust policies and controls are in place for both the organisation and 3rd party contractors.
  4. Per basic concepts in CISSP (Certified Information Systems Security Professional), controls should really only be implemented if the cost of implementing the control is less than the cost of the data being lost/breached.
  5. To this effect, the information security capability basically has to pass the "reasonableness" test:
    1. The security capability should match the size and extent of the threats.
    2. The controls should match the criticality and sensitivity of the assets.
  6. CPS 234 aims to ensure that an APRA-regulated entity takes measures to be resilient against information security incidents (including cyberattacks) by maintaining an information security capability commensurate with its information security vulnerabilities and threats:
    • Know your responsibilities (the Board is ultimately responsible).
    • Know what you have and protect it appropriately. An APRA-regulated entity must classify its information assets, including those managed by related parties and third parties, by criticality and sensitivity.
      • Ideally, you should be using Azure Information Protection or similar to apply security labels (e.g. classification, sensitivity or dissemination limiting markers) that drive preventative and detective controls.
  7. Detect and react appropriately:
    • Have Incident Plans and RACIs (Responsible, Accountable, Consulted, Informed) in terms of response.
    • Have appropriately skilled people to detect incidents. This requires user awareness and sound security practices.
    • Notify APRA of a breach within 72 hours.
      • This implies that proper (pro-active) threat detection and monitoring systems should be in place. If you don't know it's happening, you can't comply.
  8. Test and audit regularly. You must test the effectiveness of controls with a systematic testing program that is run at least annually.
    • This lends itself to regular, automated (static/dynamic) testing.

It is always critical to keep in mind that threats come from both threat actors inside (insider threat) and outside the organisation (organised or individual actors) - which lends itself to zero trust approaches to cybersecurity.


CalDigit TS3 Plus Thunderbolt 3 Docking Station - Issues with Windows 10 USB Devices


Background:
I've been having a few USB connection and power issues with the CalDigit TS3 Plus docking station (even after the January 2021 version 44.1 firmware update from CalDigit themselves). This is especially the case when I power up the laptop separately from the dock and then plug it in whilst still on.

The Problem:
The display adapters would work, but USB connectivity and audio would fail - even after plugging and unplugging USB and associated devices and powering down the hub. None of the USB devices would even power up when the issue was in effect.

Discovery/Resolution Steps:
The only thing that would fix it (most of the time) was a full power-down restart.

Looking at Device Manager, I was getting a Code 31 saying that the "Object Name Already Exists". In the device event history, the following error kept appearing:

Device PCI\VEN_1B73&DEV_1100&SUBSYS_11061AB6&REV_10\8&1b6ac812&0&0000000800E0 was not migrated due to partial or ambiguous match.

Uninstalling and reinstalling the generic "USB xHCI Compliant Host Controller" entries didn't work.

There is a teardown video of this dock with details of all the chips/controllers inside that gave me an idea - https://www.youtube.com/watch?v=8f6Zs1JyZBQ. Looking up the vendor and device details in the error above, it seems that the USB controller chip used in the CalDigit docking station is the Fresco Logic xHCI 1100 (USB3) controller.
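
As an aside, a quick way to list the hardware IDs (the VEN_/DEV_ codes) that your USB controllers report - so you can match them against an error like the one above - is something like the following (assuming Windows 10, which ships the PnpDevice cmdlets):

# List USB-class devices (controllers and hubs) with their hardware instance IDs
Get-PnpDevice -Class USB | Select-Object FriendlyName, InstanceId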

After a short search I found the following device driver page from that company to try (rather than the generic Microsoft xHCI host controller driver) - https://support.frescologic.com/portal/en/kb/articles/latest-drivers

Once I installed this, the device was correctly recognised in Windows, with no reboot required. It has worked without issue since installing the Fresco driver (fingers crossed!). I believe this Fresco driver installation should be added to the CalDigit support page to resolve this issue, as the default generic drivers seem to have problems.

Best Practices for Azure Multifactor Authentication (MFA)


When configuring Azure MFA and Conditional Access, there is the potential to lock all users out of the system, including the Azure Portal. As with any security control/mechanism, the costs of implementation and maintenance always need to be commensurate with the risks and costs of not implementing the control (e.g. assets at risk, reputational risk).

With this in mind, here are some key best practices you should follow when enabling MFA:

1. Ensure that end users are informed well in advance that MFA is coming, as it can negatively affect the user experience and cause confusion. Microsoft provides communication templates and end user documentation for this purpose (per https://www.microsoft.com/security/blog/2020/01/15/how-to-implement-multi-factor-authentication/).
2. Always grant exclusions for every MFA policy - this ensures there is always an MFA backdoor so you don't completely lock yourself out (especially if conditional access rules apply to all apps or the Azure portal). When enabling conditional access, make sure exclusions are made for:
  1. Administrators
  2. Support staff
  3. Any trusted IPs and known IP addresses/named locations.

3. Testing - use what-if policies to test effective permissions when making changes.
4. Pilot changes using select groups to apply and test MFA policies.
5. Don't block users who report fraud, as users can lock themselves out (not blocking is less secure, but automatic blocking risks locking users out through false positives).
6. Don't use the MFA portal and Conditional Access at the same time - it's not a good idea to manage MFA through the MFA control panel as well as through conditional access. Disable per-user MFA in the MFA portal before you move to conditional access - otherwise you'll have 2 competing rulesets.
7. Use Azure Identity Protection - it's a good way to ensure users are forced to register for MFA (MFA needs to be configured first) and to ensure MFA coverage. It also allows you to notify, block or require MFA when administrative accounts are logged into during high sign-in-risk activities, such as anomalous travel between sign-ins.


React - Error with create-react-app in Windows Environments - "Error: EEXIST: file already exists, mkdir 'C:\Users\XXXXX" - Fix/Solution


I've been asked about this more than once - so I thought it prudent to document the current recommended workaround for this issue. While it is possible to install react apps globally to work around the error, it is not recommended per https://create-react-app.dev/docs/getting-started/

"If you've previously installed create-react-app globally via npm install -g create-react-app, we recommend you uninstall the package using npm uninstall -g create-react-app or yarn global remove create-react-app to ensure that npx always uses the latest version."

Preconditions:

• The Windows user name has spaces in it (e.g. c:\Users\David Klein)

Steps to reproduce the error/issue:

1. User attempts to create a react app with npx in a Windows environment:
  npx create-react-app my-app
2. The error above is generated.

Diagnostics:
1. It seems that the npm cache path will consequently have a space in it, and npm (or more likely create-react-app) can't handle this.

Solution (until the bug is fixed):
1. Get the current cache path with "npm config get cache"
2. cd to the c:\Users\ directory and run dir /x to get the short (8.3) name of your user folder (e.g. DAVIDK~1)
3. Once you have the short path, set the npm cache to use that path with the following:
  npm config set cache c:\users\davidk~1\AppData\Roaming\npm-cache --global

You should now be free to continue with your react app goodness in Windows.
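
If you'd rather not eyeball the dir /x output, here is a small sketch for retrieving the 8.3 short path directly (the profile folder below is just an example - substitute your own, and note that 8.3 short names must be enabled on the drive):

# Ask the file system for the short (8.3) form of the profile folder
$fso = New-Object -ComObject Scripting.FileSystemObject
$shortPath = $fso.GetFolder("C:\Users\David Klein").ShortPath
# Point the npm cache at a path without spaces
npm config set cache "$shortPath\AppData\Roaming\npm-cache" --global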






