
Where are source connector offsets stored?

A.

offset.storage.topic

B.

storage.offset.topic

C.

topic.offset.config

D.

offset.storage.partitions

What is the default maximum size of a message the Apache Kafka broker can accept?

A.

1MB

B.

2MB

C.

5MB

D.

10MB

Which two statements about Kafka Connect Single Message Transforms (SMTs) are correct?

(Select two.)

A.

Multiple SMTs can be chained together and act on source or sink messages.

B.

SMTs are often used to join multiple records from a source data system into a single Kafka record.

C.

Masking data is a good example of an SMT.

D.

SMT functionality is included within Kafka Connect converters.
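
For context, the two behaviors that are actually part of SMTs (chaining several transforms and masking sensitive data) can be sketched as a connector configuration. The snippet below builds such a configuration as a plain Java map; the connector name, class, topic, and field names are made-up examples.

    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical sink connector configuration showing two chained SMTs:
    // one masks a sensitive field, the next inserts the record timestamp.
    public class SmtConfigSketch {
        public static void main(String[] args) {
            Map<String, String> config = new HashMap<>();
            config.put("name", "example-sink");              // hypothetical connector name
            config.put("connector.class", "FileStreamSink"); // placeholder connector class
            config.put("topics", "payments");                // hypothetical topic

            // SMTs listed here are applied in order (chaining).
            config.put("transforms", "mask,addTimestamp");
            // Masking data is a classic SMT use case.
            config.put("transforms.mask.type",
                       "org.apache.kafka.connect.transforms.MaskField$Value");
            config.put("transforms.mask.fields", "card_number");
            config.put("transforms.addTimestamp.type",
                       "org.apache.kafka.connect.transforms.InsertField$Value");
            config.put("transforms.addTimestamp.timestamp.field", "ingested_at");

            config.forEach((k, v) -> System.out.println(k + "=" + v));
        }
    }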

You are experiencing low throughput from a Java producer.

Metrics show low I/O thread ratio and low I/O thread wait ratio.

What is the most likely cause of the slow producer performance?

A.

Compression is enabled.

B.

The producer is sending large batches of messages.

C.

There is a bad data link layer (layer 2) connection from the producer to the cluster.

D.

The producer code has an expensive callback function.
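
For context: the producer's io-ratio and io-wait-ratio metrics describe only the I/O (sender) thread, and send() callbacks run on that same thread, so when both ratios are low the thread is typically busy executing user code rather than doing or waiting for network I/O. A minimal sketch of the pattern to watch for is below; the broker address and topic name are assumptions.

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    // Sketch only; broker address and topic name are assumptions.
    public class CallbackCostSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                producer.send(new ProducerRecord<>("events", "key", "value"), (metadata, exception) -> {
                    // Callbacks run on the producer's single I/O (sender) thread.
                    // Blocking work here (DB writes, HTTP calls, heavy logging)
                    // keeps that thread busy and throttles overall throughput,
                    // which matches low io-ratio and low io-wait-ratio metrics.
                    if (exception != null) {
                        exception.printStackTrace();
                    }
                });
            }
        }
    }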

Your application consumes from a topic and is configured with a deserializer.

It needs to be resilient to badly formatted records ("poison pills"). You surround the poll() call with a try/catch for RecordDeserializationException.

You need to log the bad record, skip it, and continue processing.

Which action should you take in the catch block?

A.

Log the bad record, no other action needed.

B.

Log the bad record and seek the consumer to the offset of the next record.

C.

Log the bad record and call the consumer.skip() method.

D.

Throw a runtime exception to trigger a restart of the application.
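
A minimal sketch of that catch block is shown below, assuming the consumer is created and subscribed elsewhere. It logs the failing partition and offset reported by RecordDeserializationException, then seeks past the bad record so the next poll() resumes with the following one.

    import java.time.Duration;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.errors.RecordDeserializationException;

    // Sketch of the poll loop; consumer construction and subscription are assumed.
    public class PoisonPillSketch {
        static void pollLoop(KafkaConsumer<String, String> consumer) {
            while (true) {
                try {
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                    records.forEach(r -> System.out.println(r.value()));
                } catch (RecordDeserializationException e) {
                    // Log the bad record's coordinates, then seek past it so the
                    // next poll() continues with the following record.
                    System.err.println("Skipping bad record at " + e.topicPartition()
                            + " offset " + e.offset());
                    consumer.seek(e.topicPartition(), e.offset() + 1);
                }
            }
        }
    }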

You create a producer that writes messages about bank account transactions from tens of thousands of different customers into a topic.

    Your consumers must process these messages with low latency and minimize consumer lag

    Processing takes ~6x longer than producing

    Transactions for each bank account must be processed in order

Which strategy should you use?

A.

Use the timestamp of the message's arrival as its key.

B.

Use the bank account number found in the message as the message key.

C.

Use a combination of the bank account number and the transaction timestamp as the message key.

D.

Use a unique identifier such as a universally unique identifier (UUID) as the message key.
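
For context, keying by account number sends every transaction for a given account to the same partition (preserving per-account order) while spreading different accounts across partitions, so more consumers can run in parallel to offset the slower processing. A minimal producer sketch follows; the broker address, topic name, and record contents are assumptions.

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    // Sketch; broker address, topic name, and record fields are assumptions.
    public class AccountKeySketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                String accountNumber = "ACC-42";               // hypothetical account id
                String transactionJson = "{\"amount\": 10.0}"; // hypothetical payload
                // Using the account number as the key hashes every transaction for
                // this account to the same partition, keeping per-account order
                // while other accounts spread across partitions for parallelism.
                producer.send(new ProducerRecord<>("transactions", accountNumber, transactionJson));
            }
        }
    }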

A stream processing application is tracking user activity in online shopping carts.

You want to identify periods of user inactivity.

Which type of Kafka Streams window should you use?

A.

Sliding

B.

Tumbling

C.

Hopping

D.

Session
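
For context, session windows are defined by a per-key gap of inactivity, which is what makes them fit this use case: when the gap elapses with no new events, the session closes. Below is a minimal Kafka Streams sketch; the topic name, the five-minute gap, and the use of default serdes are assumptions, and the SessionWindows factory shown is the Kafka 3.x API.

    import java.time.Duration;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.kstream.KStream;
    import org.apache.kafka.streams.kstream.SessionWindows;

    // Sketch; topic name and 5-minute inactivity gap are assumptions.
    public class CartSessionSketch {
        public static void main(String[] args) {
            StreamsBuilder builder = new StreamsBuilder();
            KStream<String, String> cartEvents = builder.stream("cart-events");

            // A session window closes once the inactivity gap elapses with no new
            // events for the key, which signals a period of user inactivity.
            cartEvents
                .groupByKey()
                .windowedBy(SessionWindows.ofInactivityGapWithNoGrace(Duration.ofMinutes(5)))
                .count();
        }
    }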

Which statement is true about how exactly-once semantics (EOS) work in Kafka Streams?

A.

Kafka Streams disables log compaction on internal changelog topics to preserve all state changes for potential recovery.

B.

EOS in Kafka Streams relies on transactional producers to atomically commit state updates to changelog topics and output records to Kafka.

C.

Kafka Streams provides EOS by periodically checkpointing state stores and replaying changelogs to recover only unprocessed messages during failure.

D.

EOS in Kafka Streams is implemented by creating a separate Kafka topic for deduplication of all messages processed by the application.
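
For context, Kafka Streams enables exactly-once semantics through the processing.guarantee setting, which makes it use transactional producers so state-store changelog updates, output records, and consumer offsets are committed atomically. A minimal configuration sketch follows; the application id and bootstrap servers are assumptions, and EXACTLY_ONCE_V2 requires brokers on Kafka 2.5+ and clients on 3.0+.

    import java.util.Properties;
    import org.apache.kafka.streams.StreamsConfig;

    // Sketch; application id and bootstrap servers are assumptions.
    public class EosConfigSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "cart-analytics");
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            // Turns on transactional producers under the hood so changelog
            // updates, output records, and consumer offsets commit atomically.
            props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE_V2);
        }
    }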

Your Kafka cluster has five brokers. The topic t1 on the cluster has:

    Two partitions

    Replication factor = 4

    min.insync.replicas = 3

You need strong durability guarantees for messages written to topic t1.

You configure a producer with acks=all, and all the replicas for t1 are in-sync.

How many brokers need to acknowledge a message before it is considered committed?

A.

2

B.

3

C.

4

D.

5
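
For context, acks=all makes the partition leader wait until every replica currently in the in-sync replica set has the record, while min.insync.replicas only sets the floor below which the write is rejected; with all four replicas of t1 in-sync, all four must have the record before it is considered committed. A minimal producer setting sketch (the bootstrap address is an assumption):

    import java.util.Properties;
    import org.apache.kafka.clients.producer.ProducerConfig;

    // Sketch; only the durability-related setting is shown.
    public class DurabilityConfigSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            // acks=all: the leader waits until every replica currently in the
            // ISR has the record. With replication factor 4 and all replicas
            // in-sync, that is all 4 replicas; min.insync.replicas=3 is only
            // the minimum ISR size below which the write would be rejected.
            props.put(ProducerConfig.ACKS_CONFIG, "all");
        }
    }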

Which configuration determines how many bytes of data are collected before sending messages to the Kafka broker?

A.

batch.size

B.

max.block.size

C.

buffer.memory

D.

send.buffer.bytes
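
For context, batch.size caps how many bytes are accumulated per partition before a batch is sent, while linger.ms bounds how long the producer waits for a batch to fill. A minimal sketch with illustrative values (the 32 KB batch size and 10 ms linger are assumptions, not defaults):

    import java.util.Properties;
    import org.apache.kafka.clients.producer.ProducerConfig;

    // Sketch; values are illustrative only.
    public class BatchingConfigSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            // batch.size: maximum bytes collected per partition batch before sending.
            props.put(ProducerConfig.BATCH_SIZE_CONFIG, 32 * 1024);
            // linger.ms: how long to wait for a batch to fill before sending anyway.
            props.put(ProducerConfig.LINGER_MS_CONFIG, 10);
        }
    }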