Go to the official Apache Spark download page, select Spark release version 3.1.1 and package type "Source Code". Unzip the downloaded package and build Spark with Apache Maven.
./build/mvn -DskipTests clean package
Example Programs
SparkPi
SparkPi is a compute-intensive task that estimates pi by "throwing darts" at a circle. Random points in the unit square ((0, 0) to (1, 1)) are picked, and we count how many fall inside the unit circle. The fraction should be pi/4, so multiplying the observed fraction by 4 gives the estimate.
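The dart-throwing logic is easy to see in a minimal plain-Python sketch (illustrative only, not Spark's actual implementation):

import random

def estimate_pi(num_samples):
    # Pick random points in the unit square and count how many fall
    # inside the unit circle (x^2 + y^2 <= 1).
    inside = 0
    for _ in range(num_samples):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:
            inside += 1
    # The in-circle fraction approximates pi/4, so scale by 4.
    return 4.0 * inside / num_samples

print(estimate_pi(1_000_000))  # prints roughly 3.14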
# Usage: pi [partitions]
./bin/run-example SparkPi 10/1000/10000
The results for different partition counts:
10 partitions: pi estimate roughly 3.1392951, running time 0.496385s
100 partitions: pi estimate roughly 3.1413819, running time 3.576282s
1000 partitions: pi estimate roughly 3.1416965, running time 43.483414s
10000 partitions: pi estimate roughly 3.1415915, running time 363.989005s
WordCount
Dataset
Gutenberg
To get large quantities of text without repetition, I crawl text files from Project Gutenberg.
# Create directory
mkdir Download/temp/
cd Download/temp/
# Crawl text files
wget -bqc -w 2 -m -H 'http://www.gutenberg.org/robot/harvest?filetypes[]=txt&langs[]=en'
# Extract text data
mkdir extracted/
find . -name '*.zip' -exec sh -c 'unzip -d extracted {}' ';'
cat extracted/*.txt > temp.txt
After crawling the text data, remove special characters from the text file and fix the encoding.
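A minimal cleanup sketch in Python (the file names are mine; it drops bytes that are not valid UTF-8, then removes non-printable characters):

# cleanup.py: strip invalid bytes and non-printable characters from temp.txt
with open('temp.txt', 'rb') as f:
    raw = f.read()
text = raw.decode('utf-8', errors='ignore')
cleaned = ''.join(ch for ch in text if ch.isprintable() or ch in '\n\t ')
with open('cleaned.txt', 'w', encoding='utf-8') as f:
    f.write(cleaned)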
Yelp
Download the Yelp dataset. I focus on the review dataset, which contains large quantities of text without repetition and can serve as a benchmark. Before running the example program on the dataset, I first need to convert the JSON file into a CSV file. Since the full dataset is too large, I process only the first 1000000 reviews, which is almost 6GB.
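A conversion sketch (an assumption here: the review file is newline-delimited JSON with review_id and text fields, as in the Yelp dataset release; file names are illustrative):

import csv
import json

LIMIT = 1000000  # keep only the first 1,000,000 reviews
with open('yelp_academic_dataset_review.json', encoding='utf-8') as src, \
        open('yelp.csv', 'w', newline='', encoding='utf-8') as dst:
    writer = csv.writer(dst)
    writer.writerow(['review_id', 'text'])
    for i, line in enumerate(src):
        if i >= LIMIT:
            break
        review = json.loads(line)
        writer.writerow([review.get('review_id'), review.get('text')])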
I use the Yelp dataset as a benchmark to see the impact of configuration changes. However, since the Yelp dataset from the previous section is too large (>5GB), the sort program always reports a buffer overflow. So, I only run on the first 500MB split (xaa).
split -b 500m yelp.txt
Configuration File
The Spark configuration file is provided as a template, spark-defaults.conf.template. To modify the configuration, first copy the template and rename it to spark-defaults.conf; settings can then be changed by adding properties to this file.
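For example:

cp conf/spark-defaults.conf.template conf/spark-defaults.conf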
Driver Memory
The first properties changed in the configuration are spark.driver.memory and spark.driver.maxResultSize. spark.driver.maxResultSize is set to 10G, larger than the size of the dataset, to ensure that serialization can complete. The driver memory is the amount of memory used by the driver process. I have tested four values for the driver memory: 1G, 2G, 4G and 8G. The program can be run with either of two commands.
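When using the properties file, the relevant lines in conf/spark-defaults.conf look like the following (a sketch; one of the tested driver-memory values shown):

spark.driver.memory        4g
spark.driver.maxResultSize 10g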
# set config directly in command line
time ./bin/spark-submit --driver-memory 1g examples/src/main/python/sort.py ../xaa
# set config in config file
time ./bin/spark-submit --properties-file conf/spark-defaults.conf examples/src/main/python/sort.py ../xaa
1G
The Spark default value of driver memory is 1G. However, it is too small for the testing dataset, and the sort program reports a Java heap space out-of-memory error.
2G
After raising the driver memory to 2G, the sort program still reports a Java heap space out-of-memory error.
4G
The sort program will run successfully when the driver memory is raised to 4G. The program can correctly print out the sorted words and the total running time is 3m47.166s (including the printing time).
8G
The program can correctly print out the sorted words and the total running time is 3m47.200s (including the printing time), which is similar to 4G.
Reducer Size
The second property to modify is spark.reducer.maxSizeInFlight, the maximum size of map outputs to fetch simultaneously from each reduce task. It represents a fixed memory overhead per reduce task, so I would like to see whether the fetch size impacts performance. I try three sizes: 12M, 48M, and 96M. spark.reducer.maxSizeInFlight is added to the properties file, and the program is run with the following command.
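The property line added to conf/spark-defaults.conf looks like (one tested value shown):

spark.reducer.maxSizeInFlight 96m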
time ./bin/spark-submit --properties-file conf/spark-defaults.conf examples/src/main/python/sort.py ../xaa
12M
The running time with 12M fetching size is 3m58.065s including the result printing time.
48M
The Spark default reducer max size in flight is 48M. The running time with 48M fetching size is 4m0.725s including the result printing time.
96M
The running time with 96M fetching size is 3m56.303s including the result printing time.
Shuffle Buffer
The third property to change is spark.shuffle.file.buffer, the size of the in-memory buffer for each shuffle file output stream. These buffers reduce the number of disk seeks and system calls made in creating intermediate shuffle files, so larger buffers can theoretically result in better performance. I try three different buffer sizes: 8K, 32K, and 1M. The following is the running command.
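The property line in conf/spark-defaults.conf looks like (one tested value shown):

spark.shuffle.file.buffer 1m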
time ./bin/spark-submit --properties-file conf/spark-defaults.conf examples/src/main/python/sort.py ../xaa
8K
The running time for 8K shuffle buffer is 3m55.807s including result printing time.
32K
32K is the Spark default value for the shuffle file buffer. The running time for the 32K shuffle buffer is 3m59.308s including result printing time.
1M
The running time for the 1M shuffle buffer is 3m59.725s including result printing time.
Serializer
The fourth property to change is spark.serializer, the class used for serializing objects that will be sent over the network or need to be cached in serialized form. The default JavaSerializer is said not to perform well enough, so I try two serializers: the default JavaSerializer and KryoSerializer. The following is the running command.
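The corresponding lines in conf/spark-defaults.conf look like:

spark.serializer                org.apache.spark.serializer.KryoSerializer
spark.kryoserializer.buffer.max 2047m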
time ./bin/spark-submit --properties-file conf/spark-defaults.conf examples/src/main/python/sort.py ../xaa
JavaSerializer
The running time without printing out results for the default JavaSerializer is 6.6012s.
KryoSerializer
The running time without printing out results for KryoSerializer is 6.5324s. In order to run the program without buffer overflow, I set spark.kryoserializer.buffer.max to 2047M, the largest value Spark accepts.
Thread Number
Since I am running Spark in local mode, I cannot control how many cores the program uses to run the benchmark, but I can set how many threads the local program runs on, within the limit of the number of logical cores of the device. I have tried five different thread counts: 1, 2, 4, 8 and 12. The default thread number is 2. The following is the command running the program.
time ./bin/spark-submit --master local[1] --properties-file conf/spark-defaults.conf examples/src/main/python/sort.py ../xaa
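Equivalently, the thread count can be set through the spark.master property in the properties file instead of on the command line (one tested value shown):

spark.master local[8]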
Hyperledger Fabric
Hyperledger Fabric is an open source enterprise-grade permissioned distributed ledger technology (DLT) platform, designed for use in enterprise contexts. It is the first distributed ledger platform to support smart contracts authored in general-purpose programming languages such as Java, Go and Node.js, rather than constrained domain-specific languages (DSLs).
Installation
Prerequisite
Update
sudo apt update
sudo apt upgrade
Git
sudo apt-get install git
cURL
sudo apt-get install curl
Docker
sudo apt-get -y install docker-compose
Confirm installation and version.
zhiqich@ubuntu:~$ docker --version
Docker version 19.03.8, build afacb8b7f0
zhiqich@ubuntu:~$ docker-compose --version
docker-compose version 1.25.0, build unknown
Test Network
The test network is managed by the network.sh script in fabric-samples/test-network. Its usage is the following.

Usage: network.sh <Mode> [Flags]
Modes:
  up - Bring up Fabric orderer and peer nodes. No channel is created
  up createChannel - Bring up fabric network with one channel
  createChannel - Create and join a channel after the network is created
  deployCC - Deploy a chaincode to a channel (defaults to asset-transfer-basic)
  down - Bring down the network

Flags:
Used with network.sh up, network.sh createChannel:
  -ca <use CAs> - Use Certificate Authorities to generate network crypto material
  -c <channel name> - Name of channel to create (defaults to "mychannel")
  -s <dbtype> - Peer state database to deploy: goleveldb (default) or couchdb
  -r <max retry> - CLI times out after certain number of attempts (defaults to 5)
  -d <delay> - CLI delays for a certain number of seconds (defaults to 3)
  -verbose - Verbose mode

Used with network.sh deployCC
  -c <channel name> - Name of channel to deploy chaincode to
  -ccn <name> - Chaincode name.
  -ccl <language> - Programming language of the chaincode to deploy: go, java, javascript, typescript
  -ccv <version> - Chaincode version. 1.0 (default), v2, version3.x, etc
  -ccs <sequence> - Chaincode definition sequence. Must be an integer, 1 (default), 2, 3, etc
  -ccp <path> - File path to the chaincode.
  -ccep <policy> - (Optional) Chaincode endorsement policy using signature policy syntax. The default policy requires an endorsement from Org1 and Org2
  -cccg <collection-config> - (Optional) File path to private data collections configuration file
  -cci <fcn name> - (Optional) Name of chaincode initialization function. When a function is provided, the execution of init will be requested and the function will be invoked.
  -h - Print this message

Possible Mode and flag combinations
  up -ca -r -d -s -verbose
  up createChannel -ca -c -r -d -s -verbose
  createChannel -c -r -d -verbose
  deployCC -ccn -ccl -ccv -ccs -ccp -cci -r -d -verbose
Bring down and remove any containers or artifacts from previous runs, then bring up the new network.
./network.sh down
./network.sh up
Bring up the network with certificate authorities using the -ca flag.
./network.sh up -ca
Check the running Docker containers and the components of the test network.
docker ps -a
Create Channel
Create a channel with the default name mychannel.
./network.sh createChannel
Create a channel with a custom name using the -c flag.
./network.sh createChannel -c channel
Start Chaincode
Start a chaincode on the channel through the preferred SDK (Java, Go, JavaScript).
./network.sh deployCC -ccn basic -ccp ../asset-transfer-basic/chaincode-go -ccl go
Fabric Application
Introduction
The tutorial provides an introduction to how Fabric applications interact with deployed blockchain networks. It uses sample programs built with the Fabric SDK to invoke a smart contract that queries and updates the ledger through the smart contract API. It also uses sample programs and a deployed Certificate Authority to generate the X.509 certificates an application needs to interact with a permissioned blockchain.
Asset Transfer
The Asset Transfer basic sample demonstrates how to initialize a ledger with assets, query assets, create new assets, update assets, and transfer an asset to a new owner. It has two components: the application and the smart contract. The application makes calls to the blockchain network to invoke transactions implemented in the chaincode (smart contract). The smart contract implements the transactions that interact with the ledger.
Set Up Blockchain Network
Launch the network using the network.sh script: bring down any currently running network and bring up a new one with Certificate Authorities.
cd fabric-samples/test-network
./network.sh down
./network.sh up createChannel -c mychannel -ca
The script deploys the Fabric test network with two peers, an ordering service, and three certificate authorities.
Creating channel 'mychannel'.
If network is not up, starting nodes with CLI timeout of '5' tries and CLI delay of '3' seconds and using database 'leveldb with crypto from 'Certificate Authorities'
Bringing up network
LOCAL_VERSION=2.3.1
DOCKER_IMAGE_VERSION=2.3.1
CA_LOCAL_VERSION=1.4.9
CA_DOCKER_IMAGE_VERSION=1.4.9
Generating certificates using Fabric CA
Creating network "fabric_test" with the default driver
Creating ca_org2    ... done
Creating ca_orderer ... done
Creating ca_org1    ... done
...
Generating CCP files for Org1 and Org2
Creating volume "docker_orderer.example.com" with default driver
Creating volume "docker_peer0.org1.example.com" with default driver
Creating volume "docker_peer0.org2.example.com" with default driver
WARNING: Found orphan containers (ca_org2, ca_orderer, ca_org1) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up.
Creating orderer.example.com    ... done
Creating peer0.org2.example.com ... done
Creating peer0.org1.example.com ... done
Creating cli                    ... done
CONTAINER ID   IMAGE                               COMMAND                  CREATED                  STATUS                  PORTS                                            NAMES
cb268c151387   hyperledger/fabric-tools:latest     "/bin/bash"              Less than a second ago   Up Less than a second                                                    cli
82354eb9645d   hyperledger/fabric-peer:latest      "peer node start"        2 seconds ago            Up Less than a second   0.0.0.0:7051->7051/tcp                           peer0.org1.example.com
96303fd44a6b   hyperledger/fabric-peer:latest      "peer node start"        2 seconds ago            Up Less than a second   7051/tcp, 0.0.0.0:9051->9051/tcp                 peer0.org2.example.com
9c37906e598b   hyperledger/fabric-orderer:latest   "orderer"                2 seconds ago            Up Less than a second   0.0.0.0:7050->7050/tcp, 0.0.0.0:7053->7053/tcp   orderer.example.com
4179c79f4d47   hyperledger/fabric-ca:latest        "sh -c 'fabric-ca-se…"   6 seconds ago            Up 5 seconds            0.0.0.0:7054->7054/tcp                           ca_org1
33810360a9c3   hyperledger/fabric-ca:latest        "sh -c 'fabric-ca-se…"   6 seconds ago            Up 5 seconds            7054/tcp, 0.0.0.0:9054->9054/tcp                 ca_orderer
fdf33f12e860   hyperledger/fabric-ca:latest        "sh -c 'fabric-ca-se…"   6 seconds ago            Up 5 seconds            7054/tcp, 0.0.0.0:8054->8054/tcp                 ca_org2
...
Deploy the chaincode with the chaincode name and language options, then run the JavaScript application. The application output is the following.
Loaded the network configuration located at /home/zhiqich/fabric-samples/test-network/organizations/peerOrganizations/org1.example.com/connection-org1.json
Built a CA Client named ca-org1
Built a file system wallet at /home/zhiqich/fabric-samples/asset-transfer-basic/application-javascript/wallet
Successfully enrolled admin user and imported it into the wallet
Successfully registered and enrolled user appUser and imported it into the wallet

--> Submit Transaction: InitLedger, function creates the initial set of assets on the ledger
*** Result: committed

--> Evaluate Transaction: GetAllAssets, function returns all the current assets on the ledger
*** Result: [ { "Key": "asset1", "Record": { "ID": "asset1", "Color": "blue", "Size": 5, "Owner": "Tomoko", "AppraisedValue": 300, "docType": "asset" } }, { "Key": "asset2", "Record": { "ID": "asset2", "Color": "red", "Size": 5, "Owner": "Brad", "AppraisedValue": 400, "docType": "asset" } }, { "Key": "asset3", "Record": { "ID": "asset3", "Color": "green", "Size": 10, "Owner": "Jin Soo", "AppraisedValue": 500, "docType": "asset" } }, { "Key": "asset4", "Record": { "ID": "asset4", "Color": "yellow", "Size": 10, "Owner": "Max", "AppraisedValue": 600, "docType": "asset" } }, { "Key": "asset5", "Record": { "ID": "asset5", "Color": "black", "Size": 15, "Owner": "Adriana", "AppraisedValue": 700, "docType": "asset" } }, { "Key": "asset6", "Record": { "ID": "asset6", "Color": "white", "Size": 15, "Owner": "Michel", "AppraisedValue": 800, "docType": "asset" } } ]

--> Submit Transaction: CreateAsset, creates new asset with ID, color, owner, size, and appraisedValue arguments
*** Result: committed
*** Result: { "ID": "asset13", "Color": "yellow", "Size": "5", "Owner": "Tom", "AppraisedValue": "1300" }

--> Evaluate Transaction: ReadAsset, function returns an asset with a given assetID
*** Result: { "ID": "asset13", "Color": "yellow", "Size": "5", "Owner": "Tom", "AppraisedValue": "1300" }

--> Evaluate Transaction: AssetExists, function returns "true" if an asset with given assetID exist
*** Result: true

--> Submit Transaction: UpdateAsset asset1, change the appraisedValue to 350
*** Result: committed

--> Evaluate Transaction: ReadAsset, function returns "asset1" attributes
*** Result: { "ID": "asset1", "Color": "blue", "Size": "5", "Owner": "Tomoko", "AppraisedValue": "350" }

--> Submit Transaction: UpdateAsset asset70, asset70 does not exist and should return an error
2021-03-05T17:06:59.733Z - error: [Transaction]: Error: No valid responses from any peers. Errors:
    peer=peer0.org2.example.com:9051, status=500, message=error in simulation: transaction returned with failure: Error: The asset asset70 does not exist
    peer=peer0.org1.example.com:7051, status=500, message=error in simulation: transaction returned with failure: Error: The asset asset70 does not exist
*** Successfully caught the error: Error: No valid responses from any peers. Errors:
    peer=peer0.org2.example.com:9051, status=500, message=error in simulation: transaction returned with failure: Error: The asset asset70 does not exist
    peer=peer0.org1.example.com:7051, status=500, message=error in simulation: transaction returned with failure: Error: The asset asset70 does not exist

--> Submit Transaction: TransferAsset asset1, transfer to new owner of Tom
*** Result: committed

--> Evaluate Transaction: ReadAsset, function returns "asset1" attributes
*** Result: { "ID": "asset1", "Color": "blue", "Size": "5", "Owner": "Tom", "AppraisedValue": "350" }
Enroll Admin User
Admin registration is bootstrapped when the Certificate Authority is started, and enrollment is executed right after the connection profile and wallet paths are specified.
// build an in memory object with the network configuration (also known as a connection profile)
const ccp = buildCCPOrg1();

// build an instance of the fabric ca services client based on
// the information in the network configuration
const caClient = buildCAClient(FabricCAServices, ccp, 'ca.org1.example.com');

// setup the wallet to hold the credentials of the application user
const wallet = await buildWallet(Wallets, walletPath);

// in a real application this would be done on an administrative flow, and only once
await enrollAdmin(caClient, wallet, mspOrg1);
After successfully enrolling the admin, the log prints the following.
Built a CA Client named ca-org1
Built a file system wallet at /home/zhiqich/fabric-samples/asset-transfer-basic/application-javascript/wallet
Successfully enrolled admin user and imported it into the wallet
Register and Enroll Application User
Application users are registered and enrolled by the admin user; they are used to interact with the blockchain network.
// in a real application this would be done only when a new user was required to be added
// and would be part of an administrative flow
await registerAndEnrollUser(caClient, wallet, mspOrg1, org1UserId, 'org1.department1');
After successfully enrolling the user, the log prints the following.
Successfully registered and enrolled user appUser and imported it into the wallet
Prepare Connection to Channel and Smart Contract
Use the channel name and contract name to get a reference to the Contract via the Gateway; the getContract() API is provided for this.
// setup the gateway instance
// The user will now be able to create connections to the fabric network and be able to
// submit transactions and query. All transactions submitted by this gateway will be
// signed by this user using the credentials stored in the wallet.
await gateway.connect(ccp, {
    wallet,
    identity: org1UserId,
    discovery: { enabled: true, asLocalhost: true } // using asLocalhost as this gateway is using a fabric network deployed locally
});

// Build a network instance based on the channel where the smart contract is deployed
const network = await gateway.getNetwork(channelName);

// Get the contract from the network.
const contract = network.getContract(chaincodeName);
Initialize Ledger
The submitTransaction() function is used to invoke the chaincode's InitLedger() function, which populates the ledger with sample data.
// Initialize a set of asset data on the channel using the chaincode 'InitLedger' function.
// This type of transaction would only be run once by an application the first time it was started after it
// deployed the first time. Any updates to the chaincode deployed later would likely not need to run
// an "init" type function.
console.log('\n--> Submit Transaction: InitLedger, function creates the initial set of assets on the ledger');
await contract.submitTransaction('InitLedger');
console.log('*** Result: committed');
...
async InitLedger(ctx) {
    const assets = [
        {
            ID: 'asset1',
            Color: 'blue',
            Size: 5,
            Owner: 'Tomoko',
            AppraisedValue: 300,
        },
        ...
After successful initialization, the log is the following.
--> Submit Transaction: InitLedger, function creates the initial set of assets on the ledger
*** Result: committed
Invoke Chaincode Function
Each peer in a blockchain network hosts a copy of the ledger. An application program can view the most recent ledger data using read-only invocations of a smart contract (queries). Applications can query data (key-value pairs) for a single key or multiple keys; JSON queries are also supported. The evaluateTransaction() function is used to call the GetAllAssets() function, which returns all assets found in the world state.
// Let's try a query type operation (function).
// This will be sent to just one peer and the results will be shown.
console.log('\n--> Evaluate Transaction: GetAllAssets, function returns all the current assets on the ledger');
let result = await contract.evaluateTransaction('GetAllAssets');
console.log(`*** Result: ${prettyJSONString(result.toString())}`);
...
// GetAllAssets returns all assets found in the world state.
async GetAllAssets(ctx) {
    const allResults = [];
    // range query with empty string for startKey and endKey does an open-ended query of all assets in the chaincode namespace.
    const iterator = await ctx.stub.getStateByRange('', '');
    let result = await iterator.next();
    while (!result.done) {
        const strValue = Buffer.from(result.value.value.toString()).toString('utf8');
        let record;
        try {
            record = JSON.parse(strValue);
        } catch (err) {
            console.log(err);
            record = strValue;
        }
        allResults.push({ Key: result.value.key, Record: record });
        result = await iterator.next();
    }
    return JSON.stringify(allResults);
}
The results should be the following.
--> Evaluate Transaction: GetAllAssets, function returns all the current assets on the ledger
*** Result: [
  {
    "Key": "asset1",
    "Record": {
      "ID": "asset1",
      "Color": "blue",
      "Size": 5,
      "Owner": "Tomoko",
      "AppraisedValue": 300,
      "docType": "asset"
    }
  },
...
The submitTransaction() function can also be used to call the CreateAsset() function, which adds a new asset to the world state.
// CreateAsset issues a new asset to the world state with given details.
async CreateAsset(ctx, id, color, size, owner, appraisedValue) {
    const asset = {
        ID: id,
        Color: color,
        Size: size,
        Owner: owner,
        AppraisedValue: appraisedValue,
    };
    return ctx.stub.putState(id, Buffer.from(JSON.stringify(asset)));
}
...
--> Submit Transaction: CreateAsset, creates new asset with ID, color, owner, size, and appraisedValue arguments
*** Result: committed
*** Result: { "ID": "asset13", "Color": "yellow", "Size": "5", "Owner": "Tom", "AppraisedValue": "1300" }
The chaincode also provides the AssetExists() and ReadAsset() functions, called through evaluateTransaction(), and the UpdateAsset() and TransferAsset() functions, called through submitTransaction(), to check whether a specific asset exists, read a specific asset's data, update a specific asset, and transfer a specific asset to a new owner.
// AssetExists returns true when asset with given ID exists in world state.
async AssetExists(ctx, id) {
    const assetJSON = await ctx.stub.getState(id);
    return assetJSON && assetJSON.length > 0;
}

// ReadAsset returns the asset stored in the world state with given id.
async ReadAsset(ctx, id) {
    const assetJSON = await ctx.stub.getState(id); // get the asset from chaincode state
    if (!assetJSON || assetJSON.length === 0) {
        throw new Error(`The asset ${id} does not exist`);
    }
    return assetJSON.toString();
}

// UpdateAsset updates an existing asset in the world state with provided parameters.
async UpdateAsset(ctx, id, color, size, owner, appraisedValue) {
    const exists = await this.AssetExists(ctx, id);
    if (!exists) {
        throw new Error(`The asset ${id} does not exist`);
    }
    // overwriting original asset with new asset
    const updatedAsset = {
        ID: id,
        Color: color,
        Size: size,
        Owner: owner,
        AppraisedValue: appraisedValue,
    };
    return ctx.stub.putState(id, Buffer.from(JSON.stringify(updatedAsset)));
}

// TransferAsset updates the owner field of asset with given id in the world state.
async TransferAsset(ctx, id, newOwner) {
    const assetString = await this.ReadAsset(ctx, id);
    const asset = JSON.parse(assetString);
    asset.Owner = newOwner;
    return ctx.stub.putState(id, Buffer.from(JSON.stringify(asset)));
}
Install build-essential with sudo apt install build-essential if there is a Wi-Fi or Ethernet connection. If no internet access is available on the device, try downloading the Ubuntu .deb packages of build-essential and installing them with sudo dpkg -i package_name.deb.
OpenDHT
OpenDHT is a lightweight C++14 Distributed Hash Table implementation. It provides an easy-to-use distributed in-memory data store: every node in the network can read and write values to the store, and values are distributed over the network with redundancy.
Build
Tool
Operating system
Linux ubuntu 5.8.0-43-generic #49~20.04.1-Ubuntu SMP Fri Feb 5 09:57:56 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
GCC
gcc (Ubuntu 10.2.0-5ubuntu1~20.04) 10.2.0
Copyright (C) 2020 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
Clang
clang version 11.0.0
Target: x86_64-unknown-linux-gnu
Thread model: posix
InstalledDir: /usr/local/bin
CMake
cmake version 3.19.4
CMake suite maintained and supported by Kitware (kitware.com/cmake).
Dhtnode
-b allows to specify a bootstrap node address (can be any running node of the DHT network)
-p allows to specify the local UDP port to bind (optional). If not set, any available port will be used. Note that the default OpenDHT port (to ideally use for public nodes) is 4222
-D enables the multicast automatic local peer discovery mechanism
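For example, to start a node on the default port with local peer discovery and bootstrap off a known public node (bootstrap.jami.net is an assumption here, a commonly used public bootstrap host):

dhtnode -p 4222 -b bootstrap.jami.net:4222 -D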
OpenDHT command line interface (CLI)
Possible commands:
  h, help    Print this help message.
  x, quit    Quit the program.
  log        Start/stop printing DHT logs.

Node information:
  ll         Print basic information and stats about the current node.
  ls [key]   Print basic information about current search(es).
  ld [key]   Print basic information about currently stored values on this node (or key).
  lr         Print the full current routing table of this node.

Operations on the DHT:
  b <ip:port>            Ping potential node at given IP address/port.
  g <key>                Get values at <key>.
  l <key>                Listen for value changes at <key>.
  cl <key> <token>       Cancel listen for <token> and <key>.
  p <key> <str>          Put string value at <key>.
  pp <key> <str>         Put string value at <key> (persistent version).
  cpp <key> <id>         Cancel persistent put operation for <key> and value <id>.
  s <key> <str>          Put string value at <key>, signed with our generated private key.
  e <key> <dest> <str>   Put string value at <key>, encrypted for <dest> with its public key (if found).
  cc                     Trigger connectivity changed signal.

Indexation operations on the DHT:
  il <name> <key> [exact match]   Lookup the index named <name> with the key <key>. Set [exact match] to 'false' for inexact match lookup.
  ii <name> <key> <value>         Inserts the value <value> under the key <key> in the index named <name>.
Demo
Create node
zhiqich@ubuntu:~/Documents/opendht$ dhtnode -D
OpenDHT node db159e7c839b5872692b244deeb849eb8195233b running on port 39901
(type 'h' or 'help' for a list of possible commands)
Check stats
>> ll
>> OpenDHT node db159e7c839b5872692b244deeb849eb8195233b running on port 39901
1 ongoing operations
IPv4 stats:
Known nodes: 0 good, 0 dubious, 0 incoming.
0 searches, 0 total cached nodes
IPv6 stats:
Known nodes: 0 good, 0 dubious, 0 incoming.
0 searches, 0 total cached nodes
Put value
>> p test "this is a test string"
Using h(test) = a94a8fe5ccb19ba61c4c0873d391e987982fbbd3
>> Put: success, took 1.36 ms. Value ID: 8530bb7894ee21b1
Get value
>> g test
Using h(test) = a94a8fe5ccb19ba61c4c0873d391e987982fbbd3
>> Get: found 1 value(s) after 178 us
Value[id:8530bb7894ee21b1 data(text/plain):""this"]
Get: completed, took 1.4 ms
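The same put/get can also be done programmatically. OpenDHT ships optional Python bindings; a minimal sketch (assuming the bindings are built and installed, and using the public bootstrap.jami.net node as an example) looks like this:

import opendht as dht

node = dht.DhtRunner()
node.run()
# join the DHT through any known running node
node.bootstrap('bootstrap.jami.net', '4222')
# store a value under the hash of 'unique_key'
node.put(dht.InfoHash.get('unique_key'), dht.Value(b'some data'))
# retrieve every value stored under that key
results = node.get(dht.InfoHash.get('unique_key'))
for value in results:
    print(value)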
Dhtchat
Dhtchat is a simple IM client working over the DHT.
dhtchat, a simple OpenDHT command line chat client.
Report bugs to: https://opendht.net
Flags are similar to those of Dhtnode
-D enables the multicast automatic local peer discovery mechanism
Command Line Interface
c {channel} to join a channel
d to disconnect the channel
e {target} {message} to send an encrypted message to a specific user with public key ID
Demo
Create chat node
zhiqich@ubuntu:~/Documents/opendht$ dhtchat -D
OpenDHT node 1f04c73f6c5af467cd68ba4058a065bc0731ac19 running on port 59537
Public key ID ecd52bfcb037b225e9021d5149a84a17eb881f4b
type 'c {hash}' to join a channel
Join channel
> c test
Joining h(test) = a94a8fe5ccb19ba61c4c0873d391e987982fbbd3
Chat
>> hi
>> d108ebdc3f238876e94e59d6214ab465d644b460 at 2021-02-10 23:09:59 (took 0.811252s) : 10835693456077075259 - hi