There are three states in which data can exist:
at dead, in action, in use.
at dormant, in mobile, in use.
at sleep, in awake, in use.
at rest, in transit, in use.
The Answer Is: D
Explanation:
Data is commonly categorized into three states because the threats and protections change depending on where the data is and what is happening to it. Data at rest is stored on a device or system, such as databases, file shares, endpoints, backups, and cloud storage. The main risks are unauthorized access, theft of storage media, misconfigured permissions, and improper disposal. Controls typically include strong access control, encryption at rest with sound key management, secure configuration and hardening, segmentation, and resilient backup protections including restricted access and immutability.
Data in transit is data moving between systems, such as client-to-server traffic, service-to-service connections, API calls, and email routing. The primary risks are interception, alteration, and impersonation through man-in-the-middle techniques. Standard controls include transport encryption (such as TLS), strong authentication and certificate validation, secure network architecture, and monitoring for anomalous connections or data flows.
Data in use is actively processed in memory by applications and users, for example when a document is opened, a record is processed by an application, or data is displayed to a user. This state is challenging because data may be decrypted for processing. Controls include least privilege, strong authentication and session management, endpoint protection, application security controls, and secure development practices, with hardware-backed isolation when required.
The hash function supports data in transit by ensuring:
validation that a message originated from a particular user.
a message was modified in transit.
a public key is transitioned into a private key.
encrypted messages are not shared with another party.
The Answer Is: B
Explanation:
A cryptographic hash function supports data in transit primarily by providing integrity assurance. When a sender computes a hash (digest) of a message and the receiver recomputes the hash after receipt, the two digests should match if the message arrived unchanged. If the message is altered in any way while traveling across the network—whether by an attacker, a faulty intermediary device, or transmission errors—the recomputed digest will differ from the original. This difference is the key signal that the message was modified in transit, which is what option B expresses. In practical secure-transport designs, hashes are typically combined with a secret key or digital signature so an attacker cannot simply modify the message and generate a new valid digest. Examples include HMAC for message authentication and digital signatures that hash the content and then sign the hash with a private key. These mechanisms provide integrity and, when keyed or signed, also provide authentication and non-repudiation properties.
Option A is more specifically about authentication of origin, which requires a keyed construction such as HMAC or a signature scheme; a plain hash alone cannot prove who sent the message. Option C is incorrect because keys are not “converted” from public to private. Option D relates to confidentiality, which is provided by encryption, not hashing. Therefore, the best answer is B because hashing enables detection of message modification during transit.
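The hash-versus-HMAC distinction above can be sketched in a few lines of Python: a plain digest detects modification, while a keyed digest (HMAC) also prevents an attacker from forging a new valid digest. The message text and key below are illustrative only.

```python
import hashlib
import hmac

# A plain hash detects modification: the receiver recomputes and compares.
message = b"transfer 100 to account 42"
digest_sent = hashlib.sha256(message).hexdigest()

tampered = b"transfer 900 to account 42"
digest_received = hashlib.sha256(tampered).hexdigest()
assert digest_sent != digest_received  # alteration in transit is detected

# An HMAC binds the digest to a shared secret key, so an attacker who
# alters the message cannot also compute a matching digest.
key = b"shared-secret-key"  # hypothetical key, for illustration only
tag = hmac.new(key, message, hashlib.sha256).hexdigest()
ok = hmac.compare_digest(tag, hmac.new(key, message, hashlib.sha256).hexdigest())
print(ok)  # True: message unchanged and key matches
```

Note the use of `hmac.compare_digest` rather than `==`, which avoids leaking information through comparison timing.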
What does non-repudiation mean in the context of web security?
Ensuring that all traffic between web servers must be securely encrypted
Providing permission to use web server resources according to security policies and specified procedures, so that the activity can be audited
Ensuring that all data has not been altered in an unauthorized manner while being transmitted between web servers
Providing the sender of a message with proof of delivery, and the receiver with proof of the sender's identity
The Answer Is: D
Explanation:
Non-repudiation is a security property that provides verifiable evidence of an action or communication so that the parties involved cannot credibly deny their participation later. In web security, it most commonly means being able to prove who sent a message or performed a transaction and, in many cases, that the message was received and recorded. This is why option D is correct: it captures the idea of giving the receiver proof of the sender’s identity and giving the sender evidence that the message or transaction was delivered or accepted.
Cybersecurity guidance typically associates non-repudiation with digital signatures, strong identity binding, and protected audit evidence. A digital signature uses asymmetric cryptography so that only the holder of a private key can sign, while anyone with the public key can verify the signature. When combined with trusted certificates, accurate time sources, and protected logs, this creates strong accountability. Non-repudiation also depends on maintaining the integrity of supporting evidence, such as tamper-resistant audit logs, secure log retention, and controlled access to signing keys.
It is different from confidentiality (encryption of traffic), and different from integrity alone (preventing unauthorized modification). It is also different from authorization and auditing, which support accountability but do not, by themselves, provide cryptographic-grade proof that a specific entity performed a specific action. Non-repudiation is especially important for high-trust transactions such as approvals, payments, and legally binding communications.
Violations of the EU’s General Data Protection Regulation (GDPR) can result in:
mandatory upgrades of the security infrastructure.
fines of €20 million or 4% of annual turnover, whichever is less.
fines of €20 million or 4% of annual turnover, whichever is greater.
a complete audit of the enterprise’s security processes.
The Answer Is: C
Explanation:
The GDPR establishes a regulatory penalty framework intended to make privacy and data-protection obligations enforceable across organizations of any size. Under GDPR, the most severe administrative fines can reach up to €20 million or up to 4% of the organization’s total worldwide annual turnover of the preceding financial year, whichever is higher. That “whichever is greater” clause is critical: it prevents large enterprises from treating privacy violations as a minor cost of doing business and ensures the sanction can scale with the organization’s economic size and risk impact.
Cybersecurity governance and risk documents typically emphasize GDPR as a driver for enterprise risk management because the consequences extend beyond monetary fines. A confirmed violation often triggers regulatory investigations, mandatory corrective actions, and potential restrictions on processing activities. Organizations may also face indirect impacts such as breach notification costs, legal claims from affected individuals, reputational harm, loss of customer trust, and increased oversight by regulators and auditors.
From a controls perspective, GDPR penalties reinforce the need for strong security and privacy-by-design practices: data minimization, lawful processing, documented purposes, retention controls, encryption where appropriate, access control and least privilege, monitoring and incident response readiness, and evidence-based accountability through policies, records, and audit trails. Selecting option C correctly reflects GDPR’s maximum fine structure and its risk-based deterrence model.
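The "whichever is greater" mechanics described above can be shown with a short calculation; the turnover figures below are made up purely to illustrate which clause dominates at different company sizes.

```python
# Sketch of GDPR's top-tier fine ceiling: the greater of EUR 20 million
# or 4% of worldwide annual turnover (figures per the explanation above).
def max_gdpr_fine(annual_turnover_eur: float) -> float:
    return max(20_000_000, 0.04 * annual_turnover_eur)

# Smaller firm: 4% of 100M is 4M, so the 20M floor applies.
print(max_gdpr_fine(100_000_000))

# Large enterprise: 4% of 2B is 80M, which exceeds the 20M floor.
print(max_gdpr_fine(2_000_000_000))
```

Because the function takes the maximum rather than the minimum, the penalty scales with the organization’s size, which is exactly why option B ("whichever is less") is wrong.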
Why would a Business Analyst include current technology when documenting the current state business processes surrounding a solution being replaced?
To ensure the future state business processes are included in user training
To identify potential security impacts to integrated systems within the value chain
To identify and meet internal security governance requirements
To classify the data elements so that information confidentiality, integrity, and availability are protected
The Answer Is: B
Explanation:
A Business Analyst documents current technology in the “as-is” state because business processes are rarely isolated; they depend on applications, interfaces, data exchanges, identity services, and shared infrastructure. From a cybersecurity perspective, replacing one solution can unintentionally change trust boundaries, authentication flows, authorization decisions, logging coverage, and data movement across integrated systems. Option B is correct because understanding the current technology landscape helps identify where security impacts may occur across the value chain, including upstream data providers, downstream consumers, third-party services, and internal platforms that rely on the existing system.
Cybersecurity documents emphasize that integration points are common attack surfaces. APIs, file transfers, message queues, single sign-on, batch jobs, and shared databases can introduce risks such as broken access control, insecure data transmission, data leakage, privilege escalation, and gaps in monitoring. If the BA captures current integrations, dependencies, and data flows, the delivery team can properly perform threat modeling, define security requirements, and avoid breaking compensating controls that other systems depend on. This also supports planning for secure decommissioning, migration, and cutover, ensuring credentials, keys, service accounts, and network paths are rotated or removed appropriately.
The other options are less precise for the question. Training is not the core driver for documenting current technology. Governance requirements apply broadly but do not explain why current tech must be included. Data classification is important, but it is a separate activity from capturing technology dependencies needed to assess integration security impacts.
What business analysis deliverable would be an essential input when designing an audit log report?
Access Control Requirements
Risk Log
Future State Business Process
Internal Audit Report
The Answer Is: A
Explanation:
Designing an audit log report requires clarity on who is allowed to do what, which actions are considered security-relevant, and what evidence must be captured to demonstrate accountability. Access Control Requirements are the essential business analysis deliverable because they define roles, permissions, segregation of duties, privileged functions, approval workflows, and the conditions under which access is granted or denied. From these requirements, the logging design can specify exactly which events must be recorded, such as authentication attempts, authorization decisions, privilege elevation, administrative changes, access to sensitive records, data exports, configuration changes, and failed access attempts. They also help determine how logs should attribute actions to unique identities, including service accounts and delegated administration, which is critical for auditability and non-repudiation.
Access control requirements also drive necessary log fields and report structure: user or role, timestamp, source, target object, action, outcome, and reason codes for denials or policy exceptions. Without these requirements, an audit log report can become either too sparse to support investigations and compliance, or too noisy to be operationally useful.
A risk log can influence priorities, but it does not define the authoritative set of access events and entitlements that must be auditable. A future state process can provide context, yet it is not as precise as access rules for determining what to log. An internal audit report may highlight gaps, but it is not the primary design input compared to formal access control requirements.
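The log fields listed above (user or role, timestamp, source, target, action, outcome, reason code) can be sketched as a record structure. The field names and sample values here are hypothetical, chosen only to show how access control requirements translate into a concrete audit record.

```python
# Illustrative audit record whose fields are driven by access control
# requirements; names and values below are hypothetical examples.
from dataclasses import asdict, dataclass
from datetime import datetime, timezone

@dataclass
class AuditEvent:
    actor: str      # unique identity, including service accounts
    role: str       # role under which the action was attempted
    action: str     # security-relevant action (export, privilege elevation, ...)
    target: str     # object acted upon
    outcome: str    # "allowed" or "denied"
    reason: str     # reason code for denials or policy exceptions
    timestamp: str  # when the event occurred (UTC)

event = AuditEvent(
    actor="svc-payroll",
    role="operator",
    action="export",
    target="employee_records",
    outcome="denied",
    reason="SoD-conflict",
    timestamp=datetime.now(timezone.utc).isoformat(),
)
print(asdict(event)["outcome"])  # denied
```

Every field maps back to an access control requirement: without the defined roles, privileged actions, and denial reason codes, the report designer cannot know which of these fields must exist or how to populate them.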
Separation of duties, as a security principle, is intended to:
optimize security application performance.
ensure that all security systems are integrated.
balance user workload.
prevent fraud and error.
The Answer Is: D
Explanation:
Separation of duties is a foundational access-control and governance principle designed to reduce the likelihood of misuse, fraud, and significant mistakes by ensuring thatno single individual can complete a critical process end-to-end without independent oversight. Cybersecurity and audit frameworks describe this as splitting high-risk activities into distinct roles so that one person’s actions are checked or complemented by another person’s authority. This limits both intentional abuse, such as unauthorized payments or data manipulation, and unintentional errors, such as misconfigurations or accidental deletion of important records.
In practice, separation of duties is implemented by defining roles and permissions so that incompatible functions are not assigned to the same account. Common examples include separating the ability to create a vendor from the ability to approve payments, separating software development from production deployment, and separating system administration from security monitoring or audit log management. This is reinforced through role-based access control, approval workflows, privileged access management, and periodic access reviews that detect conflicting entitlements and privilege creep.
The value of separation of duties is risk reduction through accountability and control. When actions require multiple parties or independent review, it becomes harder for a single compromised account or malicious insider to cause large harm without detection. It also improves reliability by introducing checkpoints that catch mistakes earlier. Therefore, the correct purpose is to prevent fraud and error.
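The periodic access reviews mentioned above often boil down to a simple check: does any single account hold two entitlements defined as incompatible? A minimal sketch, with the conflict pairs taken from the examples in the explanation (entitlement names are illustrative):

```python
# Minimal separation-of-duties check: flag any account holding both
# halves of an incompatible pair. Pairs mirror the examples above.
INCOMPATIBLE = [
    {"create_vendor", "approve_payment"},
    {"develop_code", "deploy_to_production"},
    {"administer_system", "manage_audit_logs"},
]

def sod_violations(entitlements: set[str]) -> list[set[str]]:
    """Return every incompatible pair fully contained in the entitlements."""
    return [pair for pair in INCOMPATIBLE if pair <= entitlements]

# One conflict: the account can both create vendors and approve payments.
print(len(sod_violations({"create_vendor", "approve_payment", "read_reports"})))

# No conflict: holding only one half of a pair is fine.
print(len(sod_violations({"develop_code", "read_reports"})))
```

In practice this kind of check runs against role-based access control data during access reviews, surfacing conflicting entitlements and privilege creep before they can be exploited.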
How is a risk score calculated?
Based on the confidentiality, integrity, and availability characteristics of the system
Based on the combination of probability and impact
Based on past experience regarding the risk
Based on an assessment of threats by the cyber security team
The Answer Is: B
Explanation:
A risk score is commonly calculated by combining two core factors: how likely a risk scenario is to occur and how severe the consequences would be if it did occur. This is often described in cybersecurity risk documentation as likelihood times impact, or as a structured mapping using a risk matrix. Probability or likelihood reflects the chance that a threat event will exploit a vulnerability under current conditions. It may consider elements such as threat activity, exposure, ease of exploitation, control strength, and historical incident patterns. Impact reflects the magnitude of harm to the organization, usually measured across business disruption, financial loss, legal or regulatory exposure, reputational damage, and harm to confidentiality, integrity, or availability.
While confidentiality, integrity, and availability are essential for understanding what matters and can influence impact ratings, they are typically inputs into impact determination rather than the full scoring method by themselves. Past experience and expert threat assessment can inform likelihood estimates, but they are not the standard calculation model on their own. The key concept is that risk must reflect both chance and consequence; a highly impactful event with very low likelihood may be scored similarly to a moderate impact event with high likelihood depending on the organization’s methodology.
Therefore, the most accurate description of how a risk score is calculated is the combination of probability and impact, enabling prioritization and consistent risk treatment decisions.
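The likelihood-times-impact model can be sketched on a simple 1–5 scale; the scale and the band thresholds below are illustrative choices, not taken from any specific standard.

```python
# Sketch of a likelihood x impact risk matrix on a 1-5 scale.
# Band thresholds are illustrative; each organization defines its own.
def risk_score(probability: int, impact: int) -> int:
    assert 1 <= probability <= 5 and 1 <= impact <= 5
    return probability * impact

def risk_band(score: int) -> str:
    if score >= 15:
        return "high"
    if score >= 6:
        return "medium"
    return "low"

# A rare but severe event can land near a likely but moderate one,
# as the explanation notes: both dimensions matter.
print(risk_score(1, 5), risk_band(risk_score(1, 5)))  # 5 low
print(risk_score(4, 3), risk_band(risk_score(4, 3)))  # 12 medium
```

The multiplication makes the dependence on both factors explicit: either a low probability or a low impact pulls the score down, which is why neither CIA characteristics nor threat assessments alone constitute the scoring method.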
What should organizations do with Key Risk Indicator (KRI) and Key Performance Indicator (KPI) data to facilitate decision making and improve performance and accountability?
Achieve, reset, and evaluate
Collect, analyze, and report
Prioritize, falsify, and report
Challenge, compare, and revise
The Answer Is: B
Explanation:
KRIs and KPIs are only useful when they are handled as part of a disciplined measurement lifecycle. Cybersecurity governance guidance emphasizes three essential activities: collect, analyze, and report. Organizations must first collect KRI and KPI data consistently from reliable sources such as vulnerability scanners, SIEM logs, IAM systems, ticketing platforms, and asset inventories. Collection requires defined metric owners, clear definitions, standardized time windows, and data quality checks so results are comparable across periods and business units.
Next, organizations analyze the data to understand what it means for risk and performance. Analysis includes trending over time, comparing results to targets and thresholds, correlating indicators to business outcomes, identifying outliers, and determining root causes. For KRIs, analysis highlights rising exposure or control breakdowns such as increasing critical vulnerabilities beyond SLA. For KPIs, analysis evaluates operational effectiveness such as mean time to detect and mean time to remediate.
Finally, organizations report results to the right audiences with the right level of detail. Reporting supports accountability by assigning actions, tracking remediation progress, and escalating when thresholds are exceeded. It also supports decision making by showing where investment, staffing, or control changes will have the greatest risk-reduction and performance impact. The other options are not standard, auditable metric management activities and do not reflect the established lifecycle used in cybersecurity measurement programs.
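The collect → analyze → report cycle can be sketched end to end for one KPI. The sample remediation times and the SLA threshold below are hypothetical; mean time to remediate is one of the KPIs named in the explanation.

```python
# Illustrative collect -> analyze -> report pass over one KPI:
# mean time to remediate (MTTR), in days. All figures are sample data.
from statistics import mean

# Collect: remediation durations pulled from a ticketing platform (sample).
remediation_days = [3, 5, 8, 12, 4, 9]

# Analyze: compare the mean against an agreed SLA threshold.
mttr = mean(remediation_days)
threshold = 7.0  # hypothetical SLA target in days

# Report: escalate when the threshold is exceeded, otherwise confirm status.
status = "escalate" if mttr > threshold else "within target"
print(round(mttr, 2), status)
```

Even this toy pipeline shows why the other answer options fail: there is no defensible step in which data is "falsified", and "achieve, reset" is not a measurement activity at all.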
How does Transport Layer Security ensure the reliability of a connection?
By ensuring a stateful connection between client and server
By conducting a message integrity check to prevent loss or alteration of the message
By ensuring communications use TCP/IP
By using public and private keys to verify the identities of the parties to the data transfer
The Answer Is: B
Explanation:
Transport Layer Security (TLS) strengthens the trustworthiness of application communications by ensuring that data exchanged over an untrusted network is not silently modified and is coming from the expected endpoint. While TCP provides delivery features such as sequencing and retransmission, TLS contributes to what many cybersecurity documents describe as “reliable” secure communication by adding cryptographic integrity protections. TLS uses integrity checks (such as message authentication codes in older versions/cipher suites, or authenticated encryption modes like AES-GCM and ChaCha20-Poly1305 in modern TLS) so that any alteration of data in transit is detected. If an attacker intercepts traffic and tries to change commands, session data, or application content, the integrity verification fails and the connection is typically terminated, preventing corrupted or manipulated messages from being accepted as valid.
This is distinct from merely being “stateful” (a transport-layer property) or “using TCP/IP” (a networking stack choice). TLS can run over TCP and relies on TCP for delivery reliability, but TLS itself is focused on confidentiality, integrity, and endpoint authentication. Public/private keys and certificates are used during the TLS handshake to authenticate servers (and optionally clients) and to establish shared session keys, but the ongoing protection that prevents undetected tampering is the integrity check on each protected record. Therefore, the best match to how TLS ensures secure, dependable communication is the message integrity mechanism described in option B.
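The per-record integrity check can be illustrated conceptually with an HMAC, in the spirit of the MAC-based cipher suites mentioned above. This is not real TLS: the key, record format, and verification flow are simplified placeholders (modern TLS uses AEAD modes such as AES-GCM instead of a separate MAC).

```python
# Conceptual sketch only, NOT real TLS: a MAC tag accompanies each record
# so the receiver can detect tampering. Key and payloads are illustrative.
import hashlib
import hmac

record_key = b"session-mac-key"  # stand-in for a negotiated session key

def protect(payload: bytes) -> tuple[bytes, str]:
    """Attach an integrity tag to an outgoing record."""
    return payload, hmac.new(record_key, payload, hashlib.sha256).hexdigest()

def verify(payload: bytes, tag: str) -> bool:
    """Recompute the tag; a mismatch means the record was altered."""
    expected = hmac.new(record_key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

payload, tag = protect(b"GET /account HTTP/1.1")
print(verify(payload, tag))                 # True: record accepted
print(verify(b"GET /admin HTTP/1.1", tag))  # False: tampering detected
```

In real TLS a failed integrity check aborts the connection rather than returning False, which is the behavior the explanation describes: manipulated records are never accepted as valid.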