Creating a NAV Server Availability Set using the Azure Load Balancer

This post describes the steps needed to set up NAV Virtual Machines in an Availability Set behind an Azure Load Balancer, which provides high availability for a NAV Server and simple load distribution. Before reading, you should already be familiar with the NAV Azure Image, the NAV Demo Environment and the Azure Resource Manager. You can read more about the Load Balancer here: Azure Load Balancer Overview

The following Azure PowerShell script creates an Azure Resource Group with two NAV Virtual Machines in an Availability Set and a Load Balancer with rules configured for the NAV demo environment on the Dynamics NAV 2016 gallery image.

Before running the script, you need to be connected to your Azure Subscription using the Login-AzureRmAccount cmdlet, and you need to update the $testName variable to something unique and meaningful. The script will prompt for the admin credentials for the virtual machines to be created.

When the script completes and the Virtual Machines are created, you can connect to them via RDP from the Azure Portal and run the “Initialize Virtual Machine” script to create the demo environment. When prompted for the cloud service name, provide the FQDN of the Load Balancer public IP, which is displayed at the end of the script run. If you are using self-signed certificates, you can reuse the certificate generated for the first virtual machine when running the script on the second.

The Load Balancer has rules that enable requests to the default site on port 80, the NAV Web Client on port 443 (HTTPS) and the NAV Client Service on port 7046. These ports work with the NAV demo environment on the Dynamics NAV 2016 gallery image.

The Azure Load Balancer distributes requests between the two virtual machines. The Load Balancer rules for port 443 and port 7046 are configured with session persistence, so once a client creates a session with one of the virtual machines, the load balancer continues to direct that client's requests to the virtual machine where the NAV client session was created.
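Conceptually, this kind of session persistence works like deterministic hashing on the client's source address: the same client always maps to the same backend. A minimal Python sketch of the idea follows; the backend names are hypothetical, and the real Azure Load Balancer implements this internally:

```python
import hashlib

# Hypothetical backend pool for illustration only; the real Azure Load
# Balancer manages its backend pool and affinity internally.
BACKENDS = ["navvm1", "navvm2"]

def pick_backend(client_ip: str, backends=BACKENDS) -> str:
    """Map a client IP to a backend deterministically, so repeated
    requests from the same client land on the same virtual machine."""
    digest = hashlib.sha256(client_ip.encode()).digest()
    index = int.from_bytes(digest[:4], "big") % len(backends)
    return backends[index]
```

Because the mapping depends only on the client address, a client that opened a NAV session on one virtual machine keeps hitting that machine on subsequent requests.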


The full script is hosted in a Gist.

Differences between NAV Unit Tests and Performance Tests

A recent question on NAVLoadTest about a possible “Combined Unit & load-testing module” using the NAV Application Test Toolset got me thinking about the differences between Application “Unit Tests” and Performance Tests. There are some important differences in the goals and the design of Performance Tests, such as the NAVLoadTest scenarios, and Application Tests written using the NAV Application Test Toolset.

  1. Goals:
    1. Application Tests are designed to verify correct functionality of a module.
    2. Performance Tests are designed to measure some aspect of system performance.
  2. Scope:
    1. Application Tests are designed to test individual C/AL objects and methods in isolation. The tests are executed on the NAV Server only, so no client-server communication is involved.
    2. Performance Tests test end-to-end user interactions. They run using the NAV client service which is hosted in IIS. This means that the tests measure the resources consumed by the client layer, the NAV Server, SQL Server and the communications between those layers.
  3. Data Isolation:
    1. Application Tests are designed to be data-independent and to execute in isolation from other tests. Any changes made to the database while running tests from the Test Tool are automatically rolled back by the Test Isolation feature.
    2. Performance Tests are dependent on existing data, create persistent data and are impacted by the data created by other tests. One of the goals of the load test scenarios is to observe how scenario performance changes as the dataset grows. One of the hardest parts of writing load test scenarios is ensuring that the tests continue to run predictably as the dataset changes.
  4. Test Verification:
    1. Application Tests follow the “Arrange – Act – Assert” pattern of unit tests (see http://c2.com/cgi/wiki?ArrangeActAssert). They ensure the state has not been changed unexpectedly during the test.
    2. Performance Tests have no way of controlling the initial state because other tests can run concurrently against the same database. They must be resilient to changes in data and to errors that occur during test execution, handling them appropriately as a user would. For example, the “another user has locked the record” error occurs frequently in load tests under concurrent user load.
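The contrast between points 4.1 and 4.2 can be illustrated with a small sketch. This is not NAV code; the ledger class, the lock error and the 30% contention rate are all hypothetical stand-ins used to show the shape of each style of test:

```python
import random
import unittest

class RecordLockedError(Exception):
    """Stands in for NAV's 'another user has locked the record' error."""

class FakeLedger:
    """Hypothetical in-memory stand-in for a NAV table."""
    def __init__(self):
        self.entries = []
    def post(self, amount):
        self.entries.append(amount)

class LedgerUnitTest(unittest.TestCase):
    """Unit-test style: Arrange - Act - Assert on isolated, known data."""
    def test_post_adds_entry(self):
        ledger = FakeLedger()                    # Arrange: known initial state
        ledger.post(100)                         # Act: one operation under test
        self.assertEqual(ledger.entries, [100])  # Assert: exact expected state

def post_with_retry(ledger, amount, attempts=5):
    """Load-test style: no exclusive access is assumed, so the scenario
    retries on lock errors, as a real user would. Returns the number of
    attempts it took to succeed."""
    for attempt in range(attempts):
        try:
            if random.random() < 0.3:  # simulate concurrent lock contention
                raise RecordLockedError()
            ledger.post(amount)
            return attempt + 1
        except RecordLockedError:
            continue  # another "user" held the record; try again
    raise RecordLockedError("record still locked after retries")
```

The unit test asserts an exact final state; the load-test scenario only asserts that the operation eventually completed, because the surrounding state is shared and unpredictable.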

There are probably more differences that I didn’t cover. When writing performance tests, it may seem easy to start from the scenarios used in existing application tests, but whenever I attempt to reuse an existing test as a performance test, I end up rewriting it to cover situations that don’t occur in the original test.

Recent Updates to the NAVLoadTest Repository

After returning from Directions, I have made two updates to the NAVLoadTest repo.

The first change is the result of a request made during the Directions workshop: use lookup controls when selecting records randomly instead of opening list pages. The InvokeCatchLookup extension method invokes the Lookup SystemAction and expects to catch a Lookup Form. See “Feature/use control lookups”.

The second change adds the basic Small Business User scenarios to the project to demonstrate how to use other role centres and pages. See “Feature/small business scenarios”.

NAV Performance Test Toolkit Supports NAV 2016

I have just spent 3 days at Directions EMEA 2015, where I presented the NAV Performance Test Toolkit with Freddy and ran a workshop on writing performance tests. The workshop was well attended, and I received good feedback and suggestions for improvements. You can see an issue created during the session here: Issues.

I have recently updated the NAVLoadTest repository with support for NAV 2016. The changes for NAV 2016 include references to the updated NAV Client Framework Library and some changes to the authentication code. The Client Framework library appears to have had some significant updates and now uses JSON over HTTP instead of WCF. Take a look at the communications with Fiddler while running the tests if you are interested in seeing how that works.

Please add your requests for improvements and other feedback to the NAVLoadTest issues list. This is the main repository for the toolkit and will always contain the latest version. The other repositories change less often, as they are used for demonstrations and hands-on labs and need to be kept in sync with the other demonstration materials.

There are a few other improvements made recently, in particular a change that allows you to add more than 5 records to a list (see the NAVLoadTest Commits for details).

It was great to hear from so many people that are using the toolkit. I hope to be able to push some more improvements soon. Stay tuned.

How to Write NAV Load Tests Scenarios Using Visual Studio

I created two videos about writing simple load test scenarios for Dynamics NAV using Visual Studio. The first video covers a simple scenario for opening and closing a page. The second video shows how to write a scenario that creates a Purchase Order. Both examples also show how to add the tests to a Visual Studio Load Test.

The videos are available on YouTube.

Monitoring & Diagnosing Microsoft Dynamics NAV Server Performance

At NAV Tech Days 2014, Dmytro Sitnik and I presented “Monitoring & Diagnosing Microsoft Dynamics NAV Server Performance”. The presentation and video are now available for download at mibuso.com.

The presentation was a great opportunity to see how NAV developers are interested in tools to analyze Dynamics NAV performance. There were also a lot of tough questions afterwards 🙂

What I learnt is that having a few performance counters and the infrastructure for diagnosing NAV performance is not enough. The “proper” way to create benchmarks for a system is to collect data from the performance counters over a statistically significant period of “normal” usage and then analyze the data to define the expected metrics for normal usage. This is time-consuming, and I think most NAV developers would like an easier way of determining whether their system is performing normally and of quickly identifying any obvious issues.
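The benchmarking approach described above can be sketched in a few lines: collect counter samples during normal usage, derive an expected range, and flag values outside it. The three-sigma threshold below is an illustrative assumption, not NAV guidance:

```python
import statistics

def baseline(samples, k=3.0):
    """Derive an expected range for a performance counter from samples
    collected over a period of normal usage: mean +/- k standard deviations."""
    mean = statistics.mean(samples)
    stdev = statistics.pstdev(samples)
    return mean - k * stdev, mean + k * stdev

def is_anomalous(value, samples, k=3.0):
    """True when a new counter reading falls outside the expected range."""
    low, high = baseline(samples, k)
    return value < low or value > high
```

The hard part, of course, is not the arithmetic but collecting a representative sample of "normal" usage in the first place.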

There are tools for analyzing SQL Server performance that provide a pre-defined set of “normal” performance metrics and configuration parameters to identify obvious issues. This is not a comprehensive performance analysis, but it makes a good starting point for investigations.

What is needed for NAV is a set of guidelines for NAV performance counters and configuration parameters. I will be collecting data from my own investigations to build a set of guidelines.

Let me know if you have any comments or suggestions.