Deploying Kafka

2023-09-21 22:40:45

Kafka version: kafka_2.13-3.5.1

NOTE: Your local environment must have Java 8+ installed.
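A quick way to confirm this (not part of the original notes):

java -version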

Apache Kafka can be started using ZooKeeper or KRaft. To get started with either configuration, follow one of the sections below, but not both.

1 Windows standalone (single node)

1.1 Kafka with KRaft

1.1.1 Generate a Cluster UUID 

%kafka_home%\bin\windows\kafka-storage.bat random-uuid

KAFKA_CLUSTER_ID:7W5iXjO2SESIaSv770eIyA

1.1.2 Format Log Directories

server.properties uses the default configuration (relevant parts shown below):

############################# Server Basics #############################

# The role of this server. Setting this puts us in KRaft mode
process.roles=broker,controller

# The node id associated with this instance's roles
node.id=1

# The connect string for the controller quorum
controller.quorum.voters=1@localhost:9093




############################# Socket Server Settings #############################

# The address the socket server listens on.
# Combined nodes (i.e. those with `process.roles=broker,controller`) must list the controller listener here at a minimum.
# If the broker listener is not defined, the default listener will use a host name that is equal to the value of java.net.InetAddress.getCanonicalHostName(),
# with PLAINTEXT listener name, and port 9092.
#   FORMAT:
#     listeners = listener_name://host_name:port
#   EXAMPLE:
#     listeners = PLAINTEXT://your.host.name:9092
listeners=PLAINTEXT://:9092,CONTROLLER://:9093

# Name of listener used for communication between brokers.
inter.broker.listener.name=PLAINTEXT

# Listener name, hostname and port the broker will advertise to clients.
# If not set, it uses the value for "listeners".
advertised.listeners=PLAINTEXT://localhost:9092

# A comma-separated list of the names of the listeners used by the controller.
# If no explicit mapping set in `listener.security.protocol.map`, default will be using PLAINTEXT protocol
# This is required if running in KRaft mode.
controller.listener.names=CONTROLLER


############################# Log Basics #############################

# A comma separated list of directories under which to store log files
log.dirs=/tmp/kraft-combined-logs

Format the storage directory with the cluster ID generated above:

%kafka_home%\bin\windows\kafka-storage.bat format -t 7W5iXjO2SESIaSv770eIyA -c ../../config/kraft/server.properties

1.1.3 Start the Kafka Server

%kafka_home%\bin\windows\kafka-server-start.bat ../../config/kraft/server.properties

It failed with an error:

[2023-09-20 14:41:35,667] ERROR [SharedServer id=1] Got exception while starting SharedServer (kafka.server.SharedServer)
java.io.UncheckedIOException: Error while writing the Quorum status from the file C:\tmp\kraft-combined-logs\__cluster_metadata-0\quorum-state
        at org.apache.kafka.raft.FileBasedStateStore.writeElectionStateToFile(FileBasedStateStore.java:155)
        at org.apache.kafka.raft.FileBasedStateStore.writeElectionState(FileBasedStateStore.java:128)
        at org.apache.kafka.raft.QuorumState.transitionTo(QuorumState.java:477)
        at org.apache.kafka.raft.QuorumState.initialize(QuorumState.java:212)
        at org.apache.kafka.raft.KafkaRaftClient.initialize(KafkaRaftClient.java:370)
        at kafka.raft.KafkaRaftManager.buildRaftClient(RaftManager.scala:248)
        at kafka.raft.KafkaRaftManager.<init>(RaftManager.scala:174)
        at kafka.server.SharedServer.start(SharedServer.scala:247)
        at kafka.server.SharedServer.startForController(SharedServer.scala:129)
        at kafka.server.ControllerServer.startup(ControllerServer.scala:197)
        at kafka.server.KafkaRaftServer.$anonfun$startup$1(KafkaRaftServer.scala:95)
        at kafka.server.KafkaRaftServer.$anonfun$startup$1$adapted(KafkaRaftServer.scala:95)
        at scala.Option.foreach(Option.scala:437)
        at kafka.server.KafkaRaftServer.startup(KafkaRaftServer.scala:95)
        at kafka.Kafka$.main(Kafka.scala:113)
        at kafka.Kafka.main(Kafka.scala)
Caused by: java.nio.file.FileSystemException: C:\tmp\kraft-combined-logs\__cluster_metadata-0\quorum-state.tmp -> C:\tmp\kraft-combined-logs\__cluster_metadata-0\quorum-state: 另一个程序正在使用此文件,进程无法访问。

        at sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86)
        at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
        at sun.nio.fs.WindowsFileCopy.move(WindowsFileCopy.java:387)
        at sun.nio.fs.WindowsFileSystemProvider.move(WindowsFileSystemProvider.java:287)
        at java.nio.file.Files.move(Files.java:1395)
        at org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:950)
        at org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:933)
        at org.apache.kafka.raft.FileBasedStateStore.writeElectionStateToFile(FileBasedStateStore.java:152)
        ... 15 more
        Suppressed: java.nio.file.FileSystemException: C:\tmp\kraft-combined-logs\__cluster_metadata-0\quorum-state.tmp -> C:\tmp\kraft-combined-logs\__cluster_metadata-0\quorum-state: 另一个程序正在使用此文件,进程无法访问。

                at sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86)
                at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
                at sun.nio.fs.WindowsFileCopy.move(WindowsFileCopy.java:301)
                at sun.nio.fs.WindowsFileSystemProvider.move(WindowsFileSystemProvider.java:287)
                at java.nio.file.Files.move(Files.java:1395)
                at org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:947)
                ... 17 more
[2023-09-20 14:41:35,671] INFO [ControllerServer id=1] Waiting for controller quorum voters future (kafka.server.ControllerServer)
[2023-09-20 14:41:35,673] INFO [ControllerServer id=1] Finished waiting for controller quorum voters future (kafka.server.ControllerServer)
[2023-09-20 14:41:35,680] ERROR Encountered fatal fault: caught exception (org.apache.kafka.server.fault.ProcessTerminatingFaultHandler)
java.lang.NullPointerException
        at kafka.server.ControllerServer.startup(ControllerServer.scala:210)
        at kafka.server.KafkaRaftServer.$anonfun$startup$1(KafkaRaftServer.scala:95)
        at kafka.server.KafkaRaftServer.$anonfun$startup$1$adapted(KafkaRaftServer.scala:95)
        at scala.Option.foreach(Option.scala:437)
        at kafka.server.KafkaRaftServer.startup(KafkaRaftServer.scala:95)
        at kafka.Kafka$.main(Kafka.scala:113)
        at kafka.Kafka.main(Kafka.scala)

The root cause is the FileSystemException when renaming quorum-state.tmp to quorum-state: "另一个程序正在使用此文件,进程无法访问" (the file is being used by another program, so the process cannot access it). Searching around did not help — I could only find the same question being asked, never an answer.

So that is where things ended on Windows, rather abruptly...... I was ready to give up.

1.2 Kafka with ZooKeeper

1.2.1 Start the ZooKeeper service

%kafka_home%\bin\windows\zookeeper-server-start.bat ../../config/zookeeper.properties

1.2.2 Start the Kafka broker service

%kafka_home%\bin\windows\kafka-server-start.bat ../../config/server.properties
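
A quick sanity check (not in the original notes): create a topic and send/read a message with the bundled console tools — the topic name here is arbitrary.

%kafka_home%\bin\windows\kafka-topics.bat --create --topic quickstart-events --bootstrap-server localhost:9092

%kafka_home%\bin\windows\kafka-console-producer.bat --topic quickstart-events --bootstrap-server localhost:9092

%kafka_home%\bin\windows\kafka-console-consumer.bat --topic quickstart-events --from-beginning --bootstrap-server localhost:9092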

2 Windows cluster (on the same physical machine)

2.1 Kafka with KRaft

2.1.1 Make three copies of the Kafka distribution

2.1.2 Edit each node's configuration file

kafka_cluster_node1\config\kraft\server.properties

process.roles=broker,controller

node.id=1

controller.quorum.voters=1@localhost:9093,2@localhost:9093,3@localhost:9093

listeners=PLAINTEXT://localhost:9092,CONTROLLER://localhost:9093

advertised.listeners=PLAINTEXT://localhost:9092

log.dirs=/kafka_cluster_node1/logs/tmp/kraft-combined-logs

kafka_cluster_node2\config\kraft\server.properties

process.roles=broker,controller

node.id=2

controller.quorum.voters=1@localhost:9093,2@localhost:9093,3@localhost:9093

listeners=PLAINTEXT://localhost:9092,CONTROLLER://localhost:9093

advertised.listeners=PLAINTEXT://localhost:9092

log.dirs=/kafka_cluster_node2/logs/tmp/kraft-combined-logs

kafka_cluster_node3\config\kraft\server.properties

process.roles=broker,controller

node.id=3

controller.quorum.voters=1@localhost:9093,2@localhost:9093,3@localhost:9093

listeners=PLAINTEXT://localhost:9092,CONTROLLER://localhost:9093

advertised.listeners=PLAINTEXT://localhost:9092

log.dirs=/kafka_cluster_node3/logs/tmp/kraft-combined-logs

2.1.3 Generate a Cluster UUID 

Generate the cluster ID from any one of the copies:

kafka_cluster_node1\bin\windows\kafka-storage.bat random-uuid

2.1.4 Format Log Directories

Format the log directories for each node:

kafka_cluster_node1\bin\windows\kafka-storage.bat format -t IFxmE1eDTfSOX-U_ZV7ntw -c ../../config/kraft/server.properties

kafka_cluster_node2\bin\windows\kafka-storage.bat format -t IFxmE1eDTfSOX-U_ZV7ntw -c ../../config/kraft/server.properties

kafka_cluster_node3\bin\windows\kafka-storage.bat format -t IFxmE1eDTfSOX-U_ZV7ntw -c ../../config/kraft/server.properties

2.1.5 Start the Kafka Server

Start each node:

kafka_cluster_node1\bin\windows\kafka-server-start.bat ../../config/kraft/server.properties

kafka_cluster_node2\bin\windows\kafka-server-start.bat ../../config/kraft/server.properties

kafka_cluster_node3\bin\windows\kafka-server-start.bat ../../config/kraft/server.properties

It still fails!!!

(On a single host the configuration above also has port conflicts: every node lists ports 9092/9093 in listeners, and every entry in controller.quorum.voters points at localhost:9093, so even apart from the Windows problem the ports would have to be distinct per node — see the sketch below.)

Does KRaft actually support Windows at all??? Or is it a problem with this Kafka version???

Giving up!!!
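
For reference only — not verified here, since KRaft kept failing on Windows regardless — a single-host KRaft cluster would at minimum need distinct ports per node. The port numbers below are purely an illustrative assumption:

# node 1
node.id=1
listeners=PLAINTEXT://localhost:9092,CONTROLLER://localhost:9093
advertised.listeners=PLAINTEXT://localhost:9092

# node 2
node.id=2
listeners=PLAINTEXT://localhost:9094,CONTROLLER://localhost:9095
advertised.listeners=PLAINTEXT://localhost:9094

# node 3
node.id=3
listeners=PLAINTEXT://localhost:9096,CONTROLLER://localhost:9097
advertised.listeners=PLAINTEXT://localhost:9096

# identical on all three nodes
controller.quorum.voters=1@localhost:9093,2@localhost:9095,3@localhost:9097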

2.2 Kafka with ZooKeeper

This needs one ZooKeeper node and three Kafka nodes.

2.2.1 Again, make three copies of the Kafka distribution

2.2.2 Edit each node's configuration file

kafka_cluster_node1\config\server.properties

broker.id=1

zookeeper.connect=localhost:2181

listeners=PLAINTEXT://:9092

advertised.listeners=PLAINTEXT://kafkaNode1:9092

log.dirs=/kafka_cluster_node1/logs/tmp/kafka-logs

kafka_cluster_node2\config\server.properties

broker.id=2

zookeeper.connect=localhost:2181

listeners=PLAINTEXT://:9093

advertised.listeners=PLAINTEXT://kafkaNode2:9092

log.dirs=/kafka_cluster_node2/logs/tmp/kafka-logs

kafka_cluster_node3\config\server.properties

broker.id=3

zookeeper.connect=localhost:2181

listeners=PLAINTEXT://:9094

advertised.listeners=PLAINTEXT://kafkaNode3:9092

log.dirs=/kafka_cluster_node3/logs/tmp/kafka-logs

2.2.3 Start the ZooKeeper service 

You can start a separately installed ZooKeeper, or use the one bundled with Kafka:

%kafka_home%\bin\windows\zookeeper-server-start.bat ../../config/zookeeper.properties

2.2.4 Start the Kafka broker service

Start each Kafka node:

kafka_cluster_node1\bin\windows\kafka-server-start.bat ../../config/server.properties

kafka_cluster_node2\bin\windows\kafka-server-start.bat ../../config/server.properties

kafka_cluster_node3\bin\windows\kafka-server-start.bat ../../config/server.properties

The cluster deployment succeeded!!!

But the application still has problems sending messages — leaving a hook here to come back to. (A likely culprit: advertised.listeners on nodes 2 and 3 advertise port 9092 even though those brokers actually listen on 9093 and 9094, so clients get pointed at the wrong broker.)
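
As a command-line sanity check of the cluster itself (not part of the original notes; the topic name is arbitrary):

kafka_cluster_node1\bin\windows\kafka-topics.bat --create --topic test-topic --partitions 3 --replication-factor 3 --bootstrap-server localhost:9092

kafka_cluster_node1\bin\windows\kafka-topics.bat --describe --topic test-topic --bootstrap-server localhost:9092

If all three brokers joined the cluster, the --describe output should list all three broker IDs in the replica assignments.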

3 Linux standalone (single node)

3.1 Kafka with KRaft

The steps are the same as on Windows, and it worked on the first try without any problems. (As expected!!!)
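
For reference, the Linux equivalents of the commands used above (run from the Kafka installation directory; the cluster ID is whatever random-uuid prints):

KAFKA_CLUSTER_ID="$(bin/kafka-storage.sh random-uuid)"

bin/kafka-storage.sh format -t $KAFKA_CLUSTER_ID -c config/kraft/server.properties

bin/kafka-server-start.sh config/kraft/server.properties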

3.2 Kafka with ZooKeeper

Not verified yet.
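
Not tried in these notes, but the corresponding commands would be (one terminal each):

bin/zookeeper-server-start.sh config/zookeeper.properties

bin/kafka-server-start.sh config/server.properties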

4 Windows cluster (3 virtual machines)

4.1 Kafka with KRaft


Common exceptions

Exception 1:

This is because the meta.properties file under tmp\kafka-logs, created by the earlier single-node run, contains a broker.id that does not match the current cluster nodes (the problem does not occur if each node writes to its own log directory, as configured above):

#Thu Sep 21 16:56:50 CST 2023
cluster.id=bkuCLgUiRYe5CSnptF6ZVQ
version=0
broker.id=0

Either edit broker.id so it matches the node, or simply delete the stale file/directory.
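
For example, to remove the stale metadata outright (the path is whatever log.dirs pointed at during the earlier single-node run — hypothetical here):

rmdir /s /q C:\tmp\kafka-logs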

Exception 2:

Because the cluster runs on a single machine, the advertised.listeners values need to be distinguished per node; this can be done by adding host-name mappings to the hosts file:

localhost kafkaNode1
localhost kafkaNode2
localhost kafkaNode3

Exception 3:

Again because all the nodes share one host, the listeners ports must be distinct; setting them to 9092, 9093 and 9094 respectively resolves this.

Exception 4:

When adding the host-name mappings to the hosts file, use 127.0.0.1 as the address; the literal word localhost does not work.
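
In other words, the entries from Exception 2 should look like this in the hosts file:

127.0.0.1 kafkaNode1
127.0.0.1 kafkaNode2
127.0.0.1 kafkaNode3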

Exception 5:

A client-side exception like this means spring.kafka.bootstrap-servers is configured incorrectly; it should be set to the advertised.listeners addresses.
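
For illustration only (host names taken from the hosts mapping above; the ports are assumptions and must match what each broker actually advertises):

spring.kafka.bootstrap-servers=kafkaNode1:9092,kafkaNode2:9093,kafkaNode3:9094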

Exception 6:

The application gets Error: NOT_LEADER_OR_FOLLOWER when sending messages (likely related to the advertised.listeners port mismatch noted at the end of section 2.2).

Still unresolved.

Exception 7:

Linux cluster environment.

It was simply the firewall: port 9093 was unreachable between the three nodes. Adding iptables rules solved the problem.
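
For example, on each node (assuming iptables is the active firewall; open whichever broker and controller ports your nodes actually use):

iptables -I INPUT -p tcp --dport 9092 -j ACCEPT

iptables -I INPUT -p tcp --dport 9093 -j ACCEPT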


Error Codes

Official documentation: Apache Kafka

We use numeric codes to indicate what problem occurred on the server. These can be translated by the client into exceptions or whatever the appropriate error handling mechanism in the client language. Here is a table of the error codes currently in use:

ERROR | CODE | RETRIABLE | DESCRIPTION
UNKNOWN_SERVER_ERROR | -1 | False | The server experienced an unexpected error when processing the request.
NONE | 0 | False |
OFFSET_OUT_OF_RANGE | 1 | False | The requested offset is not within the range of offsets maintained by the server.
CORRUPT_MESSAGE | 2 | True | This message has failed its CRC checksum, exceeds the valid size, has a null key for a compacted topic, or is otherwise corrupt.
UNKNOWN_TOPIC_OR_PARTITION | 3 | True | This server does not host this topic-partition.
INVALID_FETCH_SIZE | 4 | False | The requested fetch size is invalid.
LEADER_NOT_AVAILABLE | 5 | True | There is no leader for this topic-partition as we are in the middle of a leadership election.
NOT_LEADER_OR_FOLLOWER | 6 | True | For requests intended only for the leader, this error indicates that the broker is not the current leader. For requests intended for any replica, this error indicates that the broker is not a replica of the topic partition.
REQUEST_TIMED_OUT | 7 | True | The request timed out.
BROKER_NOT_AVAILABLE | 8 | False | The broker is not available.
REPLICA_NOT_AVAILABLE | 9 | True | The replica is not available for the requested topic-partition. Produce/Fetch requests and other requests intended only for the leader or follower return NOT_LEADER_OR_FOLLOWER if the broker is not a replica of the topic-partition.
MESSAGE_TOO_LARGE | 10 | False | The request included a message larger than the max message size the server will accept.
STALE_CONTROLLER_EPOCH | 11 | False | The controller moved to another broker.
OFFSET_METADATA_TOO_LARGE | 12 | False | The metadata field of the offset request was too large.
NETWORK_EXCEPTION | 13 | True | The server disconnected before a response was received.
COORDINATOR_LOAD_IN_PROGRESS | 14 | True | The coordinator is loading and hence can't process requests.
COORDINATOR_NOT_AVAILABLE | 15 | True | The coordinator is not available.
NOT_COORDINATOR | 16 | True | This is not the correct coordinator.
INVALID_TOPIC_EXCEPTION | 17 | False | The request attempted to perform an operation on an invalid topic.
RECORD_LIST_TOO_LARGE | 18 | False | The request included message batch larger than the configured segment size on the server.
NOT_ENOUGH_REPLICAS | 19 | True | Messages are rejected since there are fewer in-sync replicas than required.
NOT_ENOUGH_REPLICAS_AFTER_APPEND | 20 | True | Messages are written to the log, but to fewer in-sync replicas than required.
INVALID_REQUIRED_ACKS | 21 | False | Produce request specified an invalid value for required acks.
ILLEGAL_GENERATION | 22 | False | Specified group generation id is not valid.
INCONSISTENT_GROUP_PROTOCOL | 23 | False | The group member's supported protocols are incompatible with those of existing members or first group member tried to join with empty protocol type or empty protocol list.
INVALID_GROUP_ID | 24 | False | The configured groupId is invalid.
UNKNOWN_MEMBER_ID | 25 | False | The coordinator is not aware of this member.
INVALID_SESSION_TIMEOUT | 26 | False | The session timeout is not within the range allowed by the broker (as configured by group.min.session.timeout.ms and group.max.session.timeout.ms).
REBALANCE_IN_PROGRESS | 27 | False | The group is rebalancing, so a rejoin is needed.
INVALID_COMMIT_OFFSET_SIZE | 28 | False | The committing offset data size is not valid.
TOPIC_AUTHORIZATION_FAILED | 29 | False | Topic authorization failed.
GROUP_AUTHORIZATION_FAILED | 30 | False | Group authorization failed.
CLUSTER_AUTHORIZATION_FAILED | 31 | False | Cluster authorization failed.
INVALID_TIMESTAMP | 32 | False | The timestamp of the message is out of acceptable range.
UNSUPPORTED_SASL_MECHANISM | 33 | False | The broker does not support the requested SASL mechanism.
ILLEGAL_SASL_STATE | 34 | False | Request is not valid given the current SASL state.
UNSUPPORTED_VERSION | 35 | False | The version of API is not supported.
TOPIC_ALREADY_EXISTS | 36 | False | Topic with this name already exists.
INVALID_PARTITIONS | 37 | False | Number of partitions is below 1.
INVALID_REPLICATION_FACTOR | 38 | False | Replication factor is below 1 or larger than the number of available brokers.
INVALID_REPLICA_ASSIGNMENT | 39 | False | Replica assignment is invalid.
INVALID_CONFIG | 40 | False | Configuration is invalid.
NOT_CONTROLLER | 41 | True | This is not the correct controller for this cluster.
INVALID_REQUEST | 42 | False | This most likely occurs because of a request being malformed by the client library or the message was sent to an incompatible broker. See the broker logs for more details.
UNSUPPORTED_FOR_MESSAGE_FORMAT | 43 | False | The message format version on the broker does not support the request.
POLICY_VIOLATION | 44 | False | Request parameters do not satisfy the configured policy.
OUT_OF_ORDER_SEQUENCE_NUMBER | 45 | False | The broker received an out of order sequence number.
DUPLICATE_SEQUENCE_NUMBER | 46 | False | The broker received a duplicate sequence number.
INVALID_PRODUCER_EPOCH | 47 | False | Producer attempted to produce with an old epoch.
INVALID_TXN_STATE | 48 | False | The producer attempted a transactional operation in an invalid state.
INVALID_PRODUCER_ID_MAPPING | 49 | False | The producer attempted to use a producer id which is not currently assigned to its transactional id.
INVALID_TRANSACTION_TIMEOUT | 50 | False | The transaction timeout is larger than the maximum value allowed by the broker (as configured by transaction.max.timeout.ms).
CONCURRENT_TRANSACTIONS | 51 | True | The producer attempted to update a transaction while another concurrent operation on the same transaction was ongoing.
TRANSACTION_COORDINATOR_FENCED | 52 | False | Indicates that the transaction coordinator sending a WriteTxnMarker is no longer the current coordinator for a given producer.
TRANSACTIONAL_ID_AUTHORIZATION_FAILED | 53 | False | Transactional Id authorization failed.
SECURITY_DISABLED | 54 | False | Security features are disabled.
OPERATION_NOT_ATTEMPTED | 55 | False | The broker did not attempt to execute this operation. This may happen for batched RPCs where some operations in the batch failed, causing the broker to respond without trying the rest.
KAFKA_STORAGE_ERROR | 56 | True | Disk error when trying to access log file on the disk.
LOG_DIR_NOT_FOUND | 57 | False | The user-specified log directory is not found in the broker config.
SASL_AUTHENTICATION_FAILED | 58 | False | SASL Authentication failed.
UNKNOWN_PRODUCER_ID | 59 | False | This exception is raised by the broker if it could not locate the producer metadata associated with the producerId in question. This could happen if, for instance, the producer's records were deleted because their retention time had elapsed. Once the last records of the producerId are removed, the producer's metadata is removed from the broker, and future appends by the producer will return this exception.
REASSIGNMENT_IN_PROGRESS | 60 | False | A partition reassignment is in progress.
DELEGATION_TOKEN_AUTH_DISABLED | 61 | False | Delegation Token feature is not enabled.
DELEGATION_TOKEN_NOT_FOUND | 62 | False | Delegation Token is not found on server.
DELEGATION_TOKEN_OWNER_MISMATCH | 63 | False | Specified Principal is not valid Owner/Renewer.
DELEGATION_TOKEN_REQUEST_NOT_ALLOWED | 64 | False | Delegation Token requests are not allowed on PLAINTEXT/1-way SSL channels and on delegation token authenticated channels.
DELEGATION_TOKEN_AUTHORIZATION_FAILED | 65 | False | Delegation Token authorization failed.
DELEGATION_TOKEN_EXPIRED | 66 | False | Delegation Token is expired.
INVALID_PRINCIPAL_TYPE | 67 | False | Supplied principalType is not supported.
NON_EMPTY_GROUP | 68 | False | The group is not empty.
GROUP_ID_NOT_FOUND | 69 | False | The group id does not exist.
FETCH_SESSION_ID_NOT_FOUND | 70 | True | The fetch session ID was not found.
INVALID_FETCH_SESSION_EPOCH | 71 | True | The fetch session epoch is invalid.
LISTENER_NOT_FOUND | 72 | True | There is no listener on the leader broker that matches the listener on which metadata request was processed.
TOPIC_DELETION_DISABLED | 73 | False | Topic deletion is disabled.
FENCED_LEADER_EPOCH | 74 | True | The leader epoch in the request is older than the epoch on the broker.
UNKNOWN_LEADER_EPOCH | 75 | True | The leader epoch in the request is newer than the epoch on the broker.
UNSUPPORTED_COMPRESSION_TYPE | 76 | False | The requesting client does not support the compression type of given partition.
STALE_BROKER_EPOCH | 77 | False | Broker epoch has changed.
OFFSET_NOT_AVAILABLE | 78 | True | The leader high watermark has not caught up from a recent leader election so the offsets cannot be guaranteed to be monotonically increasing.
MEMBER_ID_REQUIRED | 79 | False | The group member needs to have a valid member id before actually entering a consumer group.
PREFERRED_LEADER_NOT_AVAILABLE | 80 | True | The preferred leader was not available.
GROUP_MAX_SIZE_REACHED | 81 | False | The consumer group has reached its max size.
FENCED_INSTANCE_ID | 82 | False | The broker rejected this static consumer since another consumer with the same group.instance.id has registered with a different member.id.
ELIGIBLE_LEADERS_NOT_AVAILABLE | 83 | True | Eligible topic partition leaders are not available.
ELECTION_NOT_NEEDED | 84 | True | Leader election not needed for topic partition.
NO_REASSIGNMENT_IN_PROGRESS | 85 | False | No partition reassignment is in progress.
GROUP_SUBSCRIBED_TO_TOPIC | 86 | False | Deleting offsets of a topic is forbidden while the consumer group is actively subscribed to it.
INVALID_RECORD | 87 | False | This record has failed the validation on broker and hence will be rejected.
UNSTABLE_OFFSET_COMMIT | 88 | True | There are unstable offsets that need to be cleared.
THROTTLING_QUOTA_EXCEEDED | 89 | True | The throttling quota has been exceeded.
PRODUCER_FENCED | 90 | False | There is a newer producer with the same transactionalId which fences the current one.
RESOURCE_NOT_FOUND | 91 | False | A request illegally referred to a resource that does not exist.
DUPLICATE_RESOURCE | 92 | False | A request illegally referred to the same resource twice.
UNACCEPTABLE_CREDENTIAL | 93 | False | Requested credential would not meet criteria for acceptability.
INCONSISTENT_VOTER_SET | 94 | False | Indicates that the either the sender or recipient of a voter-only request is not one of the expected voters
INVALID_UPDATE_VERSION | 95 | False | The given update version was invalid.
FEATURE_UPDATE_FAILED | 96 | False | Unable to update finalized features due to an unexpected server error.
PRINCIPAL_DESERIALIZATION_FAILURE | 97 | False | Request principal deserialization failed during forwarding. This indicates an internal error on the broker cluster security setup.
SNAPSHOT_NOT_FOUND | 98 | False | Requested snapshot was not found
POSITION_OUT_OF_RANGE | 99 | False | Requested position is not greater than or equal to zero, and less than the size of the snapshot.
UNKNOWN_TOPIC_ID | 100 | True | This server does not host this topic ID.
DUPLICATE_BROKER_REGISTRATION | 101 | False | This broker ID is already in use.
BROKER_ID_NOT_REGISTERED | 102 | False | The given broker ID was not registered.
INCONSISTENT_TOPIC_ID | 103 | True | The log's topic ID did not match the topic ID in the request
INCONSISTENT_CLUSTER_ID | 104 | False | The clusterId in the request does not match that found on the server
TRANSACTIONAL_ID_NOT_FOUND | 105 | False | The transactionalId could not be found
FETCH_SESSION_TOPIC_ID_ERROR | 106 | True | The fetch session encountered inconsistent topic ID usage
INELIGIBLE_REPLICA | 107 | False | The new ISR contains at least one ineligible replica.
NEW_LEADER_ELECTED | 108 | False | The AlterPartition request successfully updated the partition state but the leader has changed.
OFFSET_MOVED_TO_TIERED_STORAGE | 109 | False | The requested offset is moved to tiered storage.
FENCED_MEMBER_EPOCH | 110 | False | The member epoch is fenced by the group coordinator. The member must abandon all its partitions and rejoin.
UNRELEASED_INSTANCE_ID | 111 | False | The instance ID is still used by another member in the consumer group. That member must leave first.
UNSUPPORTED_ASSIGNOR | 112 | False | The assignor or its version range is not supported by the consumer group.