Kafka has come up in past projects, but I was never responsible for that part, so I knew little about it; this was a good chance to learn. Until now my understanding of Kafka was limited to knowing it is a distributed messaging system. Having dug in properly, I now know the key concepts: topics, brokers, producers (message publishers), and consumers (message subscribers). There is plenty of material about these online, so I won't repeat it here; straight to the code.
###1. Adding the dependency
```xml
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka_2.10</artifactId>
    <version>0.10.0.0</version>
</dependency>
```
In fact, that is not how I pulled it in. Since I am using Kafka to integrate with Spark Streaming later on, I declared spark-streaming-kafka instead, and the Kafka dependency comes in transitively. If your needs are the same, you can declare it like this:
```xml
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming-kafka-0-10_2.11</artifactId>
    <version>2.0.0</version>
</dependency>
```
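Since the whole point of this setup is the later Spark Streaming integration, here is a minimal sketch of what consuming a topic through the 0-10 direct stream looks like. The broker address matches the demo below; the topic name "test", the group id, and the class name are my own placeholders:

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.kafka010.ConsumerStrategies;
import org.apache.spark.streaming.kafka010.KafkaUtils;
import org.apache.spark.streaming.kafka010.LocationStrategies;

public class SparkStreamingKafkaDemo {
    public static void main(String[] args) throws InterruptedException {
        SparkConf conf = new SparkConf().setMaster("local[2]").setAppName("KafkaStreamDemo");
        JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(5));

        Map<String, Object> kafkaParams = new HashMap<String, Object>();
        kafkaParams.put("bootstrap.servers", "master2:6667"); // same broker as the demo below
        kafkaParams.put("key.deserializer", StringDeserializer.class);
        kafkaParams.put("value.deserializer", StringDeserializer.class);
        kafkaParams.put("group.id", "spark-demo");            // placeholder group id
        kafkaParams.put("auto.offset.reset", "latest");

        // direct stream: each Spark partition maps 1:1 to a Kafka partition
        JavaInputDStream<ConsumerRecord<String, String>> stream =
                KafkaUtils.createDirectStream(
                        jssc,
                        LocationStrategies.PreferConsistent(),
                        ConsumerStrategies.<String, String>Subscribe(
                                Collections.singletonList("test"), kafkaParams));

        // print the message payloads of each 5-second batch
        stream.map(record -> record.value()).print();

        jssc.start();
        jssc.awaitTermination();
    }
}
```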
###2. The producer class
Here I use the KafkaProducer class (the old Producer class is no longer recommended) and wrap it in a thread that publishes a message every few seconds. The code is genuinely simple, but a demo is all we need. Two details worth noting: the new KafkaProducer takes its broker list from bootstrap.servers (metadata.broker.list belongs to the deprecated Producer), and the key/value serializers must match the producer's generic type parameters, which is why the producer is typed <String, String> below.
```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class UserKafkaProducer extends Thread {

    private final KafkaProducer<String, String> producer;
    private final String topic;
    private final Properties props = new Properties();

    public UserKafkaProducer(String topic) {
        // broker list; adjust to your cluster
        props.put("bootstrap.servers", "master2:6667");
        // wait for acknowledgement from all in-sync replicas
        props.put("acks", "all");
        props.put("retries", 0);
        props.put("batch.size", 16384);
        props.put("linger.ms", 1);
        props.put("buffer.memory", 33554432);
        // serializers must match the producer's generic types
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        producer = new KafkaProducer<String, String>(props);
        this.topic = topic;
    }

    @Override
    public void run() {
        int messageNo = 1;
        while (true) {
            String messageStr = "Message_" + messageNo;
            System.out.println("Send:" + messageStr);
            // no key is given, so records are spread across partitions
            producer.send(new ProducerRecord<String, String>(topic, messageStr));
            messageNo++;
            try {
                sleep(3000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }
}
```
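A side note on delivery: `producer.send()` is asynchronous and returns a `Future` as soon as the record is handed to the background I/O thread. If you want confirmation that a message actually reached the broker, you can pass a `Callback` as the second argument. A minimal sketch, as a drop-in replacement for the `send()` call in `run()` above:

```java
// additional imports: org.apache.kafka.clients.producer.Callback,
//                     org.apache.kafka.clients.producer.RecordMetadata
producer.send(new ProducerRecord<String, String>(topic, messageStr), new Callback() {
    @Override
    public void onCompletion(RecordMetadata metadata, Exception e) {
        if (e != null) {
            e.printStackTrace(); // the send failed even after retries
        } else {
            System.out.println("Delivered to partition " + metadata.partition()
                    + " at offset " + metadata.offset());
        }
    }
});
```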
###3. The consumer class
```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;

import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;

public class UserKafkaConsumer extends Thread {

    private final ConsumerConnector consumer;
    private final String topic;

    public UserKafkaConsumer(String topic) {
        consumer = kafka.consumer.Consumer.createJavaConsumerConnector(createConsumerConfig());
        this.topic = topic;
    }

    private static ConsumerConfig createConsumerConfig() {
        Properties props = new Properties();
        // the old high-level consumer coordinates through ZooKeeper, not the brokers
        props.put("zookeeper.connect", "master1:2181,master2:2181");
        props.put("group.id", "group1");
        props.put("zookeeper.session.timeout.ms", "40000");
        props.put("zookeeper.sync.time.ms", "200");
        props.put("auto.commit.interval.ms", "1000");
        return new ConsumerConfig(props);
    }

    @Override
    public void run() {
        // request one stream (one consuming thread) for the topic
        Map<String, Integer> topicCountMap = new HashMap<String, Integer>();
        topicCountMap.put(topic, Integer.valueOf(1));
        Map<String, List<KafkaStream<byte[], byte[]>>> consumerMap =
                consumer.createMessageStreams(topicCountMap);
        KafkaStream<byte[], byte[]> stream = consumerMap.get(topic).get(0);
        ConsumerIterator<byte[], byte[]> it = stream.iterator();
        // hasNext() blocks until a message arrives
        while (it.hasNext()) {
            System.out.println("receive:" + new String(it.next().message()));
            try {
                sleep(3000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }
}
```
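Note that this consumer uses the old ZooKeeper-based high-level API (the kafka.consumer.* / kafka.javaapi.consumer.* packages), which is why it connects to ZooKeeper rather than to the brokers. Kafka 0.10 also ships the new KafkaConsumer, which talks to the brokers directly via bootstrap.servers. A minimal sketch with the same broker address (the topic name "test" is a placeholder):

```java
import java.util.Arrays;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class NewApiConsumerDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "master2:6667"); // brokers, not ZooKeeper
        props.put("group.id", "group1");
        props.put("enable.auto.commit", "true");
        props.put("auto.commit.interval.ms", "1000");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<String, String>(props);
        consumer.subscribe(Arrays.asList("test")); // placeholder topic name
        while (true) {
            // poll() waits up to 100 ms for new records
            ConsumerRecords<String, String> records = consumer.poll(100);
            for (ConsumerRecord<String, String> record : records) {
                System.out.println("receive:" + record.value());
            }
        }
    }
}
```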
###4. A simple example
```java
public static void main(String[] args) {
    // KafkaProperties.topic is a simple constants holder for the topic name (not shown in this post)
    UserKafkaProducer producerThread = new UserKafkaProducer(KafkaProperties.topic);
    producerThread.start();
    UserKafkaConsumer consumerThread = new UserKafkaConsumer(KafkaProperties.topic);
    consumerThread.start();
}
```
Run it, and the producer's "Send:" lines and the consumer's "receive:" lines should interleave in the console.
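One caveat: the topic must already exist (or the broker must have auto.create.topics.enable turned on). With a standard Kafka install it can be created with the stock command-line tool; "test" here is just a placeholder for whatever KafkaProperties.topic holds:

```bash
bin/kafka-topics.sh --create --zookeeper master1:2181 \
  --replication-factor 1 --partitions 1 --topic test
```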