AX 2012 R3 Technical Features

Microsoft Dynamics AX 2012 R3 technical features
There are some major changes in AX 2012 R3, as follows:
1. Automated deployment of AX 2012 R3 in Windows Azure
2. Building Microsoft Dynamics AX services integration with the Microsoft Windows Azure Service Bus
3. Data synchronization to multiple instances of Microsoft Dynamics AX
4. Optimizing the performance of a Microsoft Dynamics AX deployment
5. Creating Microsoft Dynamics AX builds using the new X++ server-side parallel compiler
Automated Deployment with Windows Azure - AX 2012 R3
Microsoft said that it was moving to a cloud environment where it would provide a service to host and run AX instances in the cloud. At this stage, Microsoft offers a wide variety of services that provide an organization with a well-designed infrastructure for development, testing, and small-scale production.
The following is the Azure hosting model (the blue boxes represent what Azure provides us):
IaaS (Infrastructure as a Service), as the name suggests, provides you with the computing infrastructure: physical or (quite often) virtual machines and other resources such as a virtual-machine disk image library, block and file-based storage, firewalls, load balancers, IP addresses, virtual local area networks, etc. Examples: Amazon EC2, Windows Azure, Rackspace, Google Compute Engine.
PaaS (Platform as a Service), as the name suggests, provides you with computing platforms, which typically include an operating system, a programming-language execution environment, a database, a web server, etc. Examples: AWS Elastic Beanstalk, Windows Azure, Heroku, Google App Engine.
In the SaaS (Software as a Service) model, you are provided with access to application software, often referred to as on-demand software. You don't have to worry about the installation, setup, and running of the application; the service provider does that for you. You just pay and use it through some client. Examples: Google Apps, Microsoft Office 365.
Azure Deployment Services
Post Deployment Considerations
Although Azure performs many of the configuration tasks automatically, there are a number of post-deployment actions we need to follow up on after each setup. The following describes the steps needed after deployment:
Create Microsoft Dynamics AX Builds Using the X++ Server-Side Parallel Compiler - AX 2012 R3
How the Microsoft Dynamics AX compiler works
The following depicts the phases of the X++
compiler in previous versions of Microsoft
Dynamics AX.
It is important to note that in earlier versions of Microsoft Dynamics AX, build performance is affected by metadata moving from the client to the server. In addition, the long compile times are due to the deserialization of metadata in memory.
The following is the architecture of the current compiler:
R3 Compiler Improvements
Microsoft enhanced the compiler by allowing us to use either the AXBuild.exe command or the client. However, from an architectural point of view, the client portion of the compiler was removed in the R3 release.
The following is the new architecture
A few key points to underline: the AOS now contains the logging information, so there is no cache in memory. In addition, logs are generated on each AOS; in a multi-AOS deployment scenario, the AXBuild.exe process automatically consolidates these into one log.
Finally, when using the parallel compiler, CPU usage is extremely high. In a multi-CPU scenario, the AXBuild.exe process will automatically balance the load between CPUs. Also, it is important to understand that parallel does not mean multi-threaded; the new compiler is still very much a single-threaded process.
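As a rough illustration of the two behaviors described above, here is a hypothetical Python sketch (our own model, not the actual AXBuild.exe implementation): work is balanced across single-threaded worker slots, and the per-worker logs are then merged into one.

```python
def partition_for_workers(units, workers):
    # Round-robin load balancing of compile units across worker slots.
    # Each slot models a separate single-threaded process, not a thread.
    slots = [[] for _ in range(workers)]
    for i, unit in enumerate(units):
        slots[i % workers].append(unit)
    return slots

def consolidate_logs(per_worker_logs):
    # Models the consolidation of per-AOS/per-worker logs into one log.
    merged = []
    for log in per_worker_logs:
        merged.extend(log)
    return merged
```

For example, five compile units split over two workers yields slots of three and two units, and the two worker logs come back as a single list.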
The following picture depicts what a parallel compiler output looks like
Microsoft Dynamics AX 2012 R3 - New Data
Import Export Framework Changes
One of the key new features is that DIXF (the Data Import Export Framework) runs on top of the SSIS service interface, allowing incremental runs (UPSERT). Of course, it can import/export data, and Microsoft added the capability to compare and copy data between instances as well. In addition, the new DIXF version ships with the ability to choose different data sources such as text, MS Excel, and XML files.
Further, the new DIXF can be used to extract data directly from various ODBC*** sources such as SQL, MS Access, and MS Excel. These new additions will help us streamline our data migrations and data transfers much better.
***For ODBC types, we are going to have to provide a connection string in order to simplify the data selection process. One cool thing I saw was that we can create new rows under Mapping Details to add mandatory fields, i.e. ACCOUNTNUM, in case a specific legacy system does not include them.
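For reference, an ODBC connection string follows the standard `Key=Value;` format. The following is a small hypothetical helper (the function and parameter names are ours, not part of DIXF) showing what such a string looks like for a SQL Server source:

```python
def odbc_connection_string(driver, server, database, trusted=True):
    # Builds a standard ODBC connection string of the kind an ODBC
    # source type would expect. Hypothetical helper for illustration.
    parts = [f"Driver={{{driver}}}", f"Server={server}", f"Database={database}"]
    if trusted:
        # Use Windows integrated authentication rather than SQL credentials.
        parts.append("Trusted_Connection=yes")
    return ";".join(parts) + ";"
```

For example, `odbc_connection_string("SQL Server", "AXSQL01", "LegacyDB")` produces `Driver={SQL Server};Server=AXSQL01;Database=LegacyDB;Trusted_Connection=yes;` (server and database names here are made up).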
When this scenario is true, the custom value provided can be automatically filled with a number sequence value (if we want) by choosing the "AUTO" option in that specific row, which takes a new AccountNum from the number sequence system. However, we can also choose to have default values as in older versions.
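The "AUTO" behavior can be sketched in Python as follows. This is our own simplified model of the idea, not DIXF code; the sequence format and field names are assumptions:

```python
import itertools

def make_number_sequence(prefix="CUST-", start=1):
    # Models a number sequence: each call returns the next formatted value.
    counter = itertools.count(start)
    return lambda: f"{prefix}{next(counter):06d}"

def fill_account_num(rows, next_num, mode="AUTO", default=None):
    # "AUTO": pull the next number sequence value when the source row has
    # no ACCOUNTNUM; otherwise fall back to a fixed default value.
    for row in rows:
        if not row.get("ACCOUNTNUM"):
            row["ACCOUNTNUM"] = next_num() if mode == "AUTO" else default
    return rows
```

A row arriving without an ACCOUNTNUM gets the next value from the sequence (e.g. CUST-000001), while rows that already carry one are left untouched.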
In terms of entities, the new DIXF ships with 150 entities, in comparison to the 78 (I think) it came with in earlier versions. These include master data, documents, journals, parameters, party, products, and configuration data.
Another cool addition is folder support. We are going to be able to move files around automatically (this needs to be pre-defined) to different folders in our domain, based on the operations we are executing.
The following are a few other additions:
Parallel Execution: The ability to dissect data into bundles (e.g. 1,000
rows / 10 bundles = 100 rows per task).
This is particularly useful when large data loads need to take
place. The tool provides the ability to allocate a group of
records to tasks. This combination creates a bundle, and
each bundle is independent of the others. See the following
diagram for a visual representation:
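The bundling arithmetic above can be sketched as a simple partitioning function (a hypothetical illustration in Python, not the DIXF implementation):

```python
def make_bundles(rows, bundle_count):
    # Split the record set into independent bundles; each bundle becomes
    # one task (e.g. 1,000 rows / 10 bundles = 100 rows per task).
    size, rem = divmod(len(rows), bundle_count)
    bundles, start = [], 0
    for i in range(bundle_count):
        # Spread any remainder across the first few bundles.
        end = start + size + (1 if i < rem else 0)
        bundles.append(rows[start:end])
        start = end
    return bundles
```

Because each bundle holds a disjoint slice of the rows, the tasks can run in parallel without touching each other's records.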
Role-Based Security: Provides a security framework for the different levels
of an organization; this is built on top of the existing security framework
(i.e. Production cannot import HR data).
Mapper Control: Allows flexible mapping between custom entities and
staging objects. In addition, the mapping effort is reduced when using
AX-friendly column names (i.e. ITEMID).
Custom Entity Wizard: We can compare data in different companies. This
becomes especially interesting and useful, for example, to compare
parameter data between gold and test instances. When using this tool to
import data that contains discrepancies, the system inserts the data into a
staging table, where it is compared by another process across a specific
number of companies and/or instances, and finally it gets updated.
At this point, a user can use the Comparison form to move records
between different instances.
System Configuration Data: BI-Analysis and Reporting Services, Project Server
Integration, EP, Batch Settings, Number Sequences, Email Parameters, AIF, System
Service Accounts.
DIXF Import Process
The import process is done by using an abstraction layer that uses SSIS behind the DIXF framework. Within this abstraction layer, we can add custom X++ logic.
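The incremental-run (UPSERT) behavior mentioned earlier can be sketched as follows. This is a simplified model in Python for illustration only; the real work is done by SSIS under the DIXF abstraction layer:

```python
def upsert(target, staging_rows, key="RECID"):
    # Incremental run (UPSERT): update the target row when the key already
    # exists, insert it otherwise.
    index = {row[key]: i for i, row in enumerate(target)}
    for row in staging_rows:
        if row[key] in index:
            target[index[row[key]]].update(row)
        else:
            target.append(row)
            index[row[key]] = len(target) - 1
    return target
```

Running the same staging data twice therefore changes nothing the second time, which is what makes repeated incremental loads safe.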
I asked what the recommendation would be for migrating data from legacy systems; the following is what I could get from their recommendation (I was taking notes). There are two types of data migration architecture that consolidate both importing and cleansing data.
The first option is to have a middle tier that can process the data from a legacy system in an external system and clean it before it goes to Microsoft Dynamics AX.
The second option is to import the data directly from a legacy system into Microsoft Dynamics AX.
Thank you.
