Historical Testing Approach

Historically, performance and scale testing was done by the performance test team in a lab in the SMX environment.

The SQL Servers for the SM and DW databases, as well as the management servers, all used the same hardware:

  • 8 cores
  • 16 GB of RAM
  • multi-disk arrays for the SMDB and DWDB data and log files (about 10-14 spindles)
  • data and log files were not separated onto different drive arrays
  • RAID configuration was not optimized for data vs. log read/write patterns
  • TempDB data and log files were left on the OS drive
  • direct attached storage was used

The data representing the configuration items would be populated into the database using a tool called ‘DataGen’, which would create computers, users, printers, and other CIs to a specified scale. This process would normally take several days to fully populate a database with 50,000 computers.
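The population step can be pictured roughly as follows. DataGen itself is not public, so this is only an illustrative sketch: the record shape and the `generate_cis`/`populate` helper names are assumptions, not the tool's actual design.

```python
def generate_cis(kind, count, start=1):
    """Yield simple records for CIs of one kind (computer, user, printer, ...)."""
    for i in range(start, start + count):
        yield {"kind": kind, "name": f"{kind}-{i:06d}"}

def populate(scale):
    """Build an in-memory CI list at the requested scale.

    A real run would insert each batch into the SMDB instead of
    collecting records in a list, which is why population at the
    historical 50,000-computer scale took days rather than seconds.
    """
    cis = []
    for kind, count in scale.items():
        cis.extend(generate_cis(kind, count))
    return cis

# A scaled-down example; historical runs targeted ~50,000 computers.
inventory = populate({"computer": 500, "user": 1000, "printer": 50})
```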

Following the initial population of the data, a tool would be run to simulate load on the system from console “users” performing some work.

The user “experience” of this automated system would be measured against our performance requirements and reported each time a perf run was completed. Completing a perf run could take 1-2 weeks.
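The pass/fail side of such a run usually reduces to comparing a percentile of the observed response times against a target. The original tooling and thresholds are not described here, so the nearest-rank percentile check below is only a hedged sketch; the `passes` helper and the sample numbers are invented for illustration.

```python
def percentile(samples, p):
    """Nearest-rank p-th percentile of a list of response times."""
    ordered = sorted(samples)
    k = max(0, int(round(p / 100 * len(ordered))) - 1)
    return ordered[k]

def passes(samples, requirement_s, p=95):
    """True when the p-th percentile response time meets the requirement."""
    return percentile(samples, p) <= requirement_s

# Hypothetical console response times (seconds) from one simulated run.
times = [0.8, 1.1, 0.9, 2.4, 1.0, 1.2, 0.7, 1.3, 0.9, 1.1]
```

A report for the run would then record, say, the 95th-percentile time and whether it met the requirement.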

AD Connectors would be set up to do initial data syncs.

The data warehouse would be set up to evaluate ETL transaction rate.
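The ETL transaction rate itself is just throughput over a measured window; a minimal sketch of that calculation (the `etl_rate` helper is an assumption, not part of the data warehouse tooling):

```python
def etl_rate(rows_processed, elapsed_seconds):
    """ETL throughput in rows (transactions) per second for one batch window."""
    if elapsed_seconds <= 0:
        raise ValueError("elapsed time must be positive")
    return rows_processed / elapsed_seconds

# e.g. 120,000 rows moved in a 10-minute window -> 200 rows/sec
rate = etl_rate(120_000, 600)
```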

Last edited Jul 22, 2013 at 11:17 PM by ChBooth, version 1