Data & Security Governance

Security

Encryption

All data will be encrypted in transit using standard one-way TLS, or mutual (two-way) TLS where available. The latest TLS version will be enforced to prevent the use of weaker ciphers. Data at rest will be encrypted with AES-256.
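
As a minimal sketch, this is how a Node.js/TypeScript service could enforce these transport settings; the certificate paths are placeholders, and the client-certificate options apply only where mutual TLS is available.

```typescript
import https from "node:https";
import { readFileSync } from "node:fs";

// Hypothetical certificate paths; replace with the service's actual key material.
const options: https.ServerOptions = {
  key: readFileSync("certs/server.key"),
  cert: readFileSync("certs/server.crt"),
  // Enforce the latest TLS version so weaker protocol versions and ciphers are refused.
  minVersion: "TLSv1.3",
  // Where mutual (two-way) TLS is available, require and verify a client certificate.
  ca: readFileSync("certs/trusted-clients-ca.pem"),
  requestCert: true,
  rejectUnauthorized: true,
};

https
  .createServer(options, (_req, res) => {
    res.writeHead(200, { "Content-Type": "application/json" });
    res.end(JSON.stringify({ status: "ok" }));
  })
  .listen(8443);
```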

Authentication

The preferred method of authentication will be OAuth2. If OAuth2 is not available, an API key will be used.
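
A minimal sketch of this fallback order as Express/TypeScript middleware; verifyAccessToken and isValidApiKey are hypothetical placeholders for real OAuth2 token validation and API key lookup.

```typescript
import express, { Request, Response, NextFunction } from "express";

// Hypothetical placeholder: in practice this would validate a signed JWT or
// introspect the token with the OAuth2 authorization server.
async function verifyAccessToken(token: string): Promise<boolean> {
  return token.length > 0; // placeholder check only
}

// Hypothetical placeholder: in practice API keys would be checked against a secure store.
async function isValidApiKey(key: string): Promise<boolean> {
  return key === process.env.API_KEY;
}

// Prefer an OAuth2 bearer token; fall back to an API key only when OAuth2 is unavailable.
async function authenticate(req: Request, res: Response, next: NextFunction) {
  const authHeader = req.header("Authorization") ?? "";
  if (authHeader.startsWith("Bearer ")) {
    if (await verifyAccessToken(authHeader.slice("Bearer ".length))) return next();
  } else {
    const apiKey = req.header("X-Api-Key");
    if (apiKey && (await isValidApiKey(apiKey))) return next();
  }
  res.status(401).json({ error: "unauthorized" });
}

const app = express();
app.use(authenticate);
```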

Monitoring/Logging

Continual monitoring will be put in place with defined thresholds for alerts. Dashboards will be created to track API consumption, success/error rates, transaction times, and similar metrics. The API will also be versioned, allowing more than one version to run in parallel while an older version is being deprecated.
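
A minimal sketch, assuming an Express/TypeScript service, of how per-request metrics could be captured for those dashboards and how versioned routes let two versions run side by side; the route paths and the console sink are illustrative only.

```typescript
import express, { Request, Response, NextFunction } from "express";

const app = express();

// Capture the metrics the dashboards track: call volume, success/error outcome, transaction time.
app.use((req: Request, res: Response, next: NextFunction) => {
  const started = Date.now();
  res.on("finish", () => {
    const durationMs = Date.now() - started;
    const outcome = res.statusCode < 400 ? "success" : "error";
    // Placeholder sink: a real deployment would ship this to the monitoring/alerting pipeline.
    console.log(JSON.stringify({ path: req.path, status: res.statusCode, outcome, durationMs }));
  });
  next();
});

// Versioned routes allow a new API version to run alongside the one being deprecated.
const v1 = express.Router();
const v2 = express.Router();
v1.get("/employees/:id", (_req, res) => res.json({ apiVersion: "v1" }));
v2.get("/employees/:id", (_req, res) => res.json({ apiVersion: "v2" }));
app.use("/v1", v1);
app.use("/v2", v2);

app.listen(3000);
```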

Infrastructure

Azure cloud infrastructure and resources will be used, so no hardware purchases or maintenance are required; only configuration is needed.

API Firewall

A web application firewall, exposed to the public internet, will serve as the first line of defense against attacks. A second firewall will sit behind the API Gateway and in front of the application load balancers in a private network.

API Gateway

The API Gateway, exposed to the public internet, will help secure, control, and monitor traffic.

Rate Limits

API rate limits will be applied along with geo-velocity checks, and this layer will act as an enforcement point for policies such as geo-fencing and I/O content validation and sanitization. Geo-velocity checks provide context-based authentication by evaluating the speed of travel that would be required between the locations of the previous and current login attempts.
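
A minimal illustrative sketch of both checks in TypeScript; the window size, request ceiling, and maximum plausible travel speed are assumed values, not policy.

```typescript
type LoginEvent = { lat: number; lon: number; timestampMs: number };

const WINDOW_MS = 60_000;
const MAX_REQUESTS_PER_WINDOW = 100; // assumed policy value
const MAX_PLAUSIBLE_SPEED_KMH = 900; // roughly airliner speed; assumed threshold

const windows = new Map<string, { windowStart: number; count: number }>();

// Fixed-window rate limit per client.
export function allowRequest(clientId: string, nowMs = Date.now()): boolean {
  const entry = windows.get(clientId);
  if (!entry || nowMs - entry.windowStart >= WINDOW_MS) {
    windows.set(clientId, { windowStart: nowMs, count: 1 });
    return true;
  }
  entry.count += 1;
  return entry.count <= MAX_REQUESTS_PER_WINDOW;
}

// Great-circle distance in kilometres (haversine formula).
function distanceKm(a: LoginEvent, b: LoginEvent): number {
  const toRad = (deg: number) => (deg * Math.PI) / 180;
  const dLat = toRad(b.lat - a.lat);
  const dLon = toRad(b.lon - a.lon);
  const h =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(a.lat)) * Math.cos(toRad(b.lat)) * Math.sin(dLon / 2) ** 2;
  return 2 * 6371 * Math.asin(Math.sqrt(h));
}

// Reject the attempt if the travel speed implied by the two logins is impossible.
export function passesGeoVelocity(previous: LoginEvent, current: LoginEvent): boolean {
  const hours = (current.timestampMs - previous.timestampMs) / 3_600_000;
  if (hours <= 0) return false;
  return distanceKm(previous, current) / hours <= MAX_PLAUSIBLE_SPEED_KMH;
}
```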

Data

Authorization

Access to data will follow the least-privilege access model. If only read access is needed, no other permissions will be granted. API responses should contain the minimum information necessary to fulfill a request; for example, if an employee's age is requested, the date of birth should not be returned as well.
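
A minimal sketch of that response minimization in TypeScript, assuming a hypothetical employee record shape: only the derived age is returned, and the underlying date of birth never leaves the service.

```typescript
// Hypothetical record shape for the example.
interface EmployeeRecord {
  id: string;
  name: string;
  dateOfBirth: Date;
}

function ageInYears(dateOfBirth: Date, now = new Date()): number {
  const age = now.getFullYear() - dateOfBirth.getFullYear();
  const hadBirthday =
    now.getMonth() > dateOfBirth.getMonth() ||
    (now.getMonth() === dateOfBirth.getMonth() && now.getDate() >= dateOfBirth.getDate());
  return hadBirthday ? age : age - 1;
}

// The response carries the minimum information needed to answer the request.
export function buildAgeResponse(employee: EmployeeRecord): { id: string; age: number } {
  return { id: employee.id, age: ageInYears(employee.dateOfBirth) };
}
```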

Resilience

Multi-Region

To ensure uptime, the application will be deployed in two regions, with the primary region hot and the failover region cold. A health check will determine whether the primary is down and the load balancer needs to switch to the failover.
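
A minimal sketch, assuming an Express/TypeScript service, of the health endpoint a load balancer could probe and a primary-first failover decision; the URLs, port, and timeout are placeholders.

```typescript
import express from "express";

// Health endpoint the load balancer probes when deciding whether to fail over
// from the hot primary region to the cold secondary region.
const app = express();
app.get("/healthz", (_req, res) => {
  // Hypothetical dependency checks (database, downstream services) would go here.
  const healthy = true;
  res.status(healthy ? 200 : 503).json({ status: healthy ? "up" : "down" });
});
app.listen(3000);

// Illustrative primary-first selection: switch to the cold failover only when the
// primary's health probe fails.
const PRIMARY_HEALTH = "https://api-primary.example.com/healthz";
const SECONDARY_HEALTH = "https://api-secondary.example.com/healthz";

export async function selectRegion(): Promise<string> {
  try {
    const probe = await fetch(PRIMARY_HEALTH, { signal: AbortSignal.timeout(2_000) });
    if (probe.ok) return PRIMARY_HEALTH;
  } catch {
    // primary unreachable; fall through to the failover
  }
  return SECONDARY_HEALTH;
}
```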

Multi-Instance

At least two instances will run in an auto-scaling group: as load increases, additional instances will be created, and as load decreases, the number of instances will shrink.
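
A minimal sketch of that scaling behavior; in the actual deployment these rules would be expressed as Azure autoscale settings rather than application code, and the CPU thresholds and maximum count are assumptions.

```typescript
const MIN_INSTANCES = 2;             // the floor stated above
const MAX_INSTANCES = 10;            // assumed ceiling
const SCALE_OUT_CPU_PERCENT = 70;    // assumed threshold
const SCALE_IN_CPU_PERCENT = 30;     // assumed threshold

// Add an instance under high load, remove one under low load, never dropping below two.
export function desiredInstanceCount(currentInstances: number, averageCpuPercent: number): number {
  if (averageCpuPercent > SCALE_OUT_CPU_PERCENT) {
    return Math.min(currentInstances + 1, MAX_INSTANCES);
  }
  if (averageCpuPercent < SCALE_IN_CPU_PERCENT) {
    return Math.max(currentInstances - 1, MIN_INSTANCES);
  }
  return currentInstances;
}
```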

Performance

Infrastructure

Hardware performance will be monitored using a combination of built-in Azure tools in the Azure Portal and IgniteConnex Runtime Dashboards in the IgniteConnex Observability Portal.

Application

A custom IgniteConnex Dashboard will be created to monitor application performance metrics including, but not limited to, request/file transfers per day, average transfer rate, and average request/file size.
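
A minimal sketch of how those metrics could be derived from raw transfer records; the record shape is an assumption, and in practice the values would come from the IgniteConnex Dashboard rather than ad-hoc code.

```typescript
// Hypothetical per-transfer record emitted by the application.
interface TransferRecord {
  timestampMs: number;
  bytes: number;
  durationMs: number;
}

// Aggregate one day's records into the dashboard metrics named above.
export function summarize(records: TransferRecord[]) {
  const transferCount = records.length;
  const totalBytes = records.reduce((sum, r) => sum + r.bytes, 0);
  const totalSeconds = records.reduce((sum, r) => sum + r.durationMs, 0) / 1000;
  return {
    transfersPerDay: transferCount, // assuming `records` covers a single day
    averageSizeBytes: transferCount ? totalBytes / transferCount : 0,
    averageTransferRateBytesPerSec: totalSeconds ? totalBytes / totalSeconds : 0,
  };
}
```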