
On the afternoon of August 4, 2021, I set out from Shanghai for Atlanta. Because of the outbreak in Nanjing, many domestic flights had been canceled, and Pudong Airport felt rather empty; only two shops in the departure hall were still open, selling snacks, instant noodles and the like. The flight, bound for Amsterdam with a stopover in Seoul, was scheduled to depart at 20:30 but was delayed half an hour by pandemic-related disinfection and inspections. Most of the passengers were Chinese, many in protective suits, checking each other's gear before boarding and asking about one another's final destinations. My seatmate was a college sophomore, one of a dozen or so classmates heading to Berlin to study abroad, and his excitement at taking his first international flight was written all over his face.

The flight passed over Helsinki and Stockholm, turned near Oslo and Gothenburg, and landed in Amsterdam a little after four in the morning local time. Setting foot in Europe for the first time, my head was full of imaginings; watching the departure board, I pictured flying on to London, Paris, Prague... I watched the sunrise at the airport, filled up on coffee and McDonald's, and spent the whole day reading a novel.

At 17:00 I took off for Atlanta and landed at 20:30. After collecting my luggage I took the shuttle bus to the metro station and reached my lodging at 22:00.




"Sister, Tonight I Am in Delingha" by Haizi

Sister, tonight I am in Delingha, night shrouds the city
Sister, tonight I have only the Gobi

At the end of the grassland my hands are empty
In grief I cannot hold a single teardrop
Sister, tonight I am in Delingha
This is a desolate city in the rain

Apart from those passing through and those who dwell here
Delingha... tonight
This is the only, the last, lyric.
This is the only, the last, grassland.

I give the stones back to the stones
Let victory belong to victory
Tonight the highland barley belongs only to itself
Everything is growing

Tonight I have only the beautiful Gobi, empty
Sister, tonight I do not care about mankind, I miss only you.

Installation

Go to the official Apache Spark download page, select Spark release version 3.1.1 and package type "Source Code", then unzip the downloaded package and build Spark with Apache Maven.

1
./build/mvn -DskipTests clean package

Example Programs

SparkPi

SparkPi is a compute-intensive task that estimates pi by "throwing darts" at a circle: random points are picked in the unit square ((0, 0) to (1, 1)) and we count how many fall inside the unit circle. That fraction should approach pi/4, which gives the estimate.
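
The same idea can be sketched in a few lines of plain Python (a minimal illustration of the estimator, not the actual SparkPi source, which parallelizes the dart throwing across partitions):

import random

def estimate_pi(n):
    # throw n darts at the unit square; the fraction landing inside
    # the quarter circle of radius 1 approaches pi/4
    inside = sum(1 for _ in range(n)
                 if random.random() ** 2 + random.random() ** 2 <= 1)
    return 4.0 * inside / n

print(estimate_pi(1000000))  # prints roughly 3.14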

# Usage: pi [partitions]
./bin/run-example SparkPi 10   # repeat with 100, 1000, and 10000

The results for different partition counts:

Partitions    Pi Estimate    Running Time
10            3.1392951      0.496385s
100           3.1413819      3.576282s
1000          3.1416965      43.483414s
10000         3.1415915      363.989005s

WordCount

Dataset

Gutenberg

To get a large quantity of text without repetition, I crawl text files from Project Gutenberg.

# Create directory
mkdir -p Download/temp/
cd Download/temp/
# Crawl text files
wget -bqc -w 2 -m -H 'http://www.gutenberg.org/robot/harvest?filetypes[]=txt&langs[]=en'
# Extract text data
mkdir extracted/
find . -name '*.zip' -exec sh -c 'unzip -d extracted {}' ';'
cat extracted/*.txt > temp.txt

After crawling the text data, remove special characters from the text file and fix the encoding.

import re

# fix encoding and strip special characters from the concatenated text
string = open('temp.txt', encoding="ISO-8859-1").read()
new_str = re.sub(r'[^a-zA-Z0-9\n.]', ' ', string)
open('gutenberg.txt', 'w').write(new_str)

Yelp

Download the Yelp dataset. I will focus on the review data, which contains a large quantity of text without repetition and can serve as a benchmark. Before running the example program on the dataset, I first need to convert the JSON file into a CSV file. Since the dataset is too large to load at once, I process it in chunks of 1,000,000 reviews; the resulting file is almost 6GB.

import pandas as pd

review_json_path = 'yelp_academic_dataset_review.json'
size = 1000000
# read the reviews in chunks of 1,000,000 lines
review = pd.read_json(review_json_path, lines=True,
                      dtype={'review_id': str, 'user_id': str, 'business_id': str,
                             'stars': int, 'date': str, 'text': str,
                             'useful': int, 'funny': int, 'cool': int},
                      chunksize=size)
chunk_list = []
for chunk_review in review:
    # drop every column except the review text
    chunk_review = chunk_review.drop(
        ['review_id', 'user_id', 'business_id', 'stars', 'date', 'useful', 'funny', 'cool'], axis=1)
    chunk_list.append(chunk_review)
df = pd.concat(chunk_list, ignore_index=True, join='outer', axis=0)
csv_name = "result.csv"
df.to_csv(csv_name, index=False)

After getting the CSV file, I also need to clean the data by removing special characters.

import re

# clean the CSV produced above and write the plain-text benchmark file
string = open('result.csv').read()
new_str = re.sub(r'[^a-zA-Z0-9\n]', ' ', string)
open('yelp.txt', 'w').write(new_str)

Result

Gutenberg

Run the following command to count words in the dataset.

time ./bin/spark-submit examples/src/main/python/wordcount.py ../gutenberg.txt

The total running time is 0m6.037s.
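
For reference, the heart of the word count program is the classic map/reduce pattern. A minimal PySpark sketch (an illustration of the pattern, not necessarily identical to the bundled wordcount.py):

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("WordCount").getOrCreate()
lines = spark.read.text("../gutenberg.txt").rdd.map(lambda r: r[0])
# split each line into words, emit (word, 1), then sum the counts per word
counts = lines.flatMap(lambda line: line.split(' ')) \
              .map(lambda word: (word, 1)) \
              .reduceByKey(lambda a, b: a + b)
for word, count in counts.collect():
    print(word, count)
spark.stop()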

Yelp

Run the following command to count words in the dataset.

time ./bin/spark-submit examples/src/main/python/wordcount.py ../yelp.txt

The total running time is 1m56.498s.

Configuration Impact

I choose to run the example MapReduce program sort to see the impact of configuration.

Bug Fix

The provided Python code for the sort program (sort.py) has a bug on line 35. Fix it by removing the conversion from string to int.

# line 35 originally read .map(lambda x: (int(x), 1)), which fails on
# non-numeric words; drop the int() conversion
sortedCount = lines.flatMap(lambda x: x.split(' ')) \
    .map(lambda x: (x, 1)) \
    .sortByKey()

Dataset Preparation

I will use the Yelp dataset as a benchmark to see the impact of configuration. However, since the Yelp dataset from the previous section is too large (>5GB), the sort program always reports buffer overflow. So I only run on the first 500MB split (xaa).

split -b 500m yelp.txt

Configuration File

Spark's configuration file is provided as a template, spark-defaults.conf.template. To modify the configuration, first copy the template and rename it to spark-defaults.conf; settings can then be changed by adding properties to this file.
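
For example, the properties tested in the following sections can be collected in conf/spark-defaults.conf like this (one property per line; the values shown are ones used in this post):

spark.driver.memory              4g
spark.driver.maxResultSize       10g
spark.reducer.maxSizeInFlight    48m
spark.shuffle.file.buffer        32k
spark.serializer                 org.apache.spark.serializer.KryoSerializer
spark.kryoserializer.buffer.max  2047m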

Driver Memory

The first properties to change are spark.driver.memory and spark.driver.maxResultSize. spark.driver.maxResultSize is set to 10G, larger than the dataset, to ensure that serialization can complete. The driver memory is the amount of memory used by the driver process. I have tested four values for the driver memory: 1G, 2G, 4G and 8G. The program can be run in either of two ways.

# set config directly in command line
time ./bin/spark-submit --driver-memory 1g examples/src/main/python/sort.py ../xaa
# set config in config file
time ./bin/spark-submit --properties-file conf/spark-defaults.conf examples/src/main/python/sort.py ../xaa

1G

The Spark default value of driver memory is 1G. However, it is too small for the test dataset, and the sort program reports a Java heap space out-of-memory error.

2G

After raising the driver memory to 2G, the sort program still reports a Java heap space out-of-memory error.

4G

The sort program will run successfully when the driver memory is raised to 4G. The program can correctly print out the sorted words and the total running time is 3m47.166s (including the printing time).

8G

The program can correctly print out the sorted words and the total running time is 3m47.200s (including the printing time), which is similar to 4G.

Reducer Size

The second property to modify is spark.reducer.maxSizeInFlight, the maximum size of map outputs to fetch simultaneously from each reduce task. It represents a fixed memory overhead per reduce task. I would like to see whether the fetch size impacts performance, so I will try three sizes: 12M, 48M, and 96M. The property is added to the property file, and the program is run with the following command.

time ./bin/spark-submit --properties-file conf/spark-defaults.conf examples/src/main/python/sort.py ../xaa

12M

The running time with a 12M fetch size is 3m58.065s, including the result printing time.

48M

The Spark default reducer max size in flight is 48M. The running time with a 48M fetch size is 4m0.725s, including the result printing time.

96M

The running time with a 96M fetch size is 3m56.303s, including the result printing time.

Shuffle Buffer

The third property to change is the spark.shuffle.file.buffer. It is the size of the in-memory buffer for each shuffle file output stream. The buffers can reduce the number of disk seeks and system calls made in creating intermediate shuffle files so that larger buffers can theoretically result in better performance. I will try three different buffer sizes: 8K, 32K, and 1M. The following is the running command.

time ./bin/spark-submit --properties-file conf/spark-defaults.conf examples/src/main/python/sort.py ../xaa

8K

The running time for 8K shuffle buffer is 3m55.807s including result printing time.

32K

32K is the Spark default value for the shuffle file buffer. The running time for a 32K shuffle buffer is 3m59.308s including result printing time.

1M

The running time for a 1M shuffle buffer is 3m59.725s including result printing time.

Serializer

The fourth property to change is spark.serializer, the class used for serializing objects that will be sent over the network or need to be cached in serialized form. The default JavaSerializer is said not to perform well enough, so I will try two serializers: the default and KryoSerializer. The following is the running command.

time ./bin/spark-submit --properties-file conf/spark-defaults.conf examples/src/main/python/sort.py ../xaa

JavaSerializer

The running time without printing out results for the default serializer is 6.6012s.

KryoSerializer

The running time without printing out results for KryoSerializer is 6.5324s. In order to run the program without buffer overflow, I set spark.kryoserializer.buffer.max to 2047M.

Thread Number

Since I am running Spark in local mode, I can't control how many cores the program uses, but I can set how many threads the local master runs on, up to the number of logical cores of the device. I have tried five different thread counts: 1, 2, 4, 8 and 12. The default is 2. The following command runs the program on one thread; replace local[1] with local[2], local[4], and so on for the other counts.

time ./bin/spark-submit --master local[1] --properties-file conf/spark-defaults.conf examples/src/main/python/sort.py ../xaa

Summary

Thread Number    Running Time
1                32.909s
2                18.484s
4                9.565s
8                7.100s
12               6.570s


Introduction

Hyperledger Fabric is an open source enterprise-grade permissioned distributed ledger technology (DLT) platform, designed for use in enterprise contexts. It is the first distributed ledger platform to support smart contracts authored in general-purpose programming languages such as Java, Go and Node.js, rather than constrained domain-specific languages (DSL).

Installation

Prerequisite

Update

sudo apt update
sudo apt upgrade

Git

sudo apt-get install git

cURL

sudo apt-get install curl

Docker

sudo apt-get -y install docker-compose

Confirm installation and version.

zhiqich@ubuntu:~$ docker --version
Docker version 19.03.8, build afacb8b7f0
zhiqich@ubuntu:~$ docker-compose --version
docker-compose version 1.25.0, build unknown

Start the Docker daemon and add the user to the Docker group.

sudo systemctl start docker
sudo systemctl enable docker
sudo usermod -a -G docker zhiqich

(Optional) Go

Download the Linux Go archive from the official Go website.

https://golang.org/dl/go1.16.linux-amd64.tar.gz

Extract the package into /usr/local (or another location such as $HOME) to create a Go tree.

sudo tar -C /usr/local -xzf go1.16.linux-amd64.tar.gz

Add /usr/local/go/bin to the PATH environment variable.

vim $HOME/.profile
export PATH=$PATH:/usr/local/go/bin

Apply change and confirm installation.

source $HOME/.profile
go version

(Optional) JQ

sudo apt-get install jq

Fabric Sample

Download fabric-samples and related Docker images and binaries.

curl -sSL https://bit.ly/2ysbOFE | bash -s

If Go is preferred, install the Fabric samples under the Go directory, or set GOPATH to the Go workspace.

mkdir -p $HOME/go/src/github.com/zhiqich
cd $HOME/go/src/github.com/zhiqich

Fabric Application SDK

Node.js

sudo apt install nodejs
sudo apt install npm

Java

sudo apt install openjdk-8-jre-headless

Run Fabric

Test Network

Bring Up Test Network

The annotated script network.sh in the test-network directory stands up a Fabric network using Docker images.

cd fabric-samples/test-network
./network.sh -h

Usage of network.sh.

Usage: 
network.sh <Mode> [Flags]
Modes:
up - Bring up Fabric orderer and peer nodes. No channel is created
up createChannel - Bring up fabric network with one channel
createChannel - Create and join a channel after the network is created
deployCC - Deploy a chaincode to a channel (defaults to asset-transfer-basic)
down - Bring down the network

Flags:
Used with network.sh up, network.sh createChannel:
-ca <use CAs> - Use Certificate Authorities to generate network crypto material
-c <channel name> - Name of channel to create (defaults to "mychannel")
-s <dbtype> - Peer state database to deploy: goleveldb (default) or couchdb
-r <max retry> - CLI times out after certain number of attempts (defaults to 5)
-d <delay> - CLI delays for a certain number of seconds (defaults to 3)
-verbose - Verbose mode

Used with network.sh deployCC
-c <channel name> - Name of channel to deploy chaincode to
-ccn <name> - Chaincode name.
-ccl <language> - Programming language of the chaincode to deploy: go, java, javascript, typescript
-ccv <version> - Chaincode version. 1.0 (default), v2, version3.x, etc
-ccs <sequence> - Chaincode definition sequence. Must be an integer, 1 (default), 2, 3, etc
-ccp <path> - File path to the chaincode.
-ccep <policy> - (Optional) Chaincode endorsement policy using signature policy syntax. The default policy requires an endorsement from Org1 and Org2
-cccg <collection-config> - (Optional) File path to private data collections configuration file
-cci <fcn name> - (Optional) Name of chaincode initialization function. When a function is provided, the execution of init will be requested and the function will be invoked.

-h - Print this message

Possible Mode and flag combinations
up -ca -r -d -s -verbose
up createChannel -ca -c -r -d -s -verbose
createChannel -c -r -d -verbose
deployCC -ccn -ccl -ccv -ccs -ccp -cci -r -d -verbose

Examples:
network.sh up createChannel -ca -c mychannel -s couchdb
network.sh createChannel -c channelName
network.sh deployCC -ccn basic -ccp ../asset-transfer-basic/chaincode-javascript/ -ccl javascript
network.sh deployCC -ccn mychaincode -ccp ./user/mychaincode -ccv 1 -ccl javascript

Bring down the network and remove any containers or artifacts from previous runs, then bring up a new network.

./network.sh down
./network.sh up

Bring up the network with Certificate Authorities using the -ca flag.

./network.sh up -ca

Check the running Docker containers and the components of the test network.

docker ps -a

Create Channel

Create a channel with the default name mychannel.

./network.sh createChannel

Create a channel with a custom name using the -c flag.

./network.sh createChannel -c channel

Start Chaincode

Start a chaincode on the channel using the preferred language (Java, Go, JavaScript).

./network.sh deployCC -ccn basic -ccp ../asset-transfer-basic/chaincode-go -ccl go

Fabric Application

Introduction

The tutorial provides an introduction to how Fabric applications interact with deployed blockchain networks. It uses sample programs built using Fabric SDK to invoke a smart contract which queries and updates the ledger with smart contract API. It also uses sample programs and a deployed Certificate Authority to generate X.509 certificates that an application needs to interact with a permissioned blockchain.

Asset Transfer

Asset Transfer basic sample demonstrates how to initialize a ledger with assets, query assets, create new assets, update assets and transfer an asset to a new owner. It has two components: application and smart contract. The application makes calls to the blockchain network to invoke transactions implemented in the chaincode (smart contract). The smart contract implements the transactions that involve interactions with the ledger.

Set Up Blockchain Network

Launch the network using the network.sh script: bring down any currently running network, then bring up a new one with Certificate Authorities.

cd fabric-samples/test-network
./network.sh down
./network.sh up createChannel -c mychannel -ca

The script will deploy the Fabric test network with two peers, an ordering service and three certificate authorities.

Creating channel 'mychannel'.
If network is not up, starting nodes with CLI timeout of '5' tries and CLI delay of '3' seconds and using database 'leveldb with crypto from 'Certificate Authorities'
Bringing up network
LOCAL_VERSION=2.3.1
DOCKER_IMAGE_VERSION=2.3.1
CA_LOCAL_VERSION=1.4.9
CA_DOCKER_IMAGE_VERSION=1.4.9
Generating certificates using Fabric CA
Creating network "fabric_test" with the default driver
Creating ca_org2 ... done
Creating ca_orderer ... done
Creating ca_org1 ... done
...
Generating CCP files for Org1 and Org2
Creating volume "docker_orderer.example.com" with default driver
Creating volume "docker_peer0.org1.example.com" with default driver
Creating volume "docker_peer0.org2.example.com" with default driver
WARNING: Found orphan containers (ca_org2, ca_orderer, ca_org1) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up.
Creating orderer.example.com ... done
Creating peer0.org2.example.com ... done
Creating peer0.org1.example.com ... done
Creating cli ... done
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
cb268c151387 hyperledger/fabric-tools:latest "/bin/bash" Less than a second ago Up Less than a second cli
82354eb9645d hyperledger/fabric-peer:latest "peer node start" 2 seconds ago Up Less than a second 0.0.0.0:7051->7051/tcp peer0.org1.example.com
96303fd44a6b hyperledger/fabric-peer:latest "peer node start" 2 seconds ago Up Less than a second 7051/tcp, 0.0.0.0:9051->9051/tcp peer0.org2.example.com
9c37906e598b hyperledger/fabric-orderer:latest "orderer" 2 seconds ago Up Less than a second 0.0.0.0:7050->7050/tcp, 0.0.0.0:7053->7053/tcp orderer.example.com
4179c79f4d47 hyperledger/fabric-ca:latest "sh -c 'fabric-ca-se…" 6 seconds ago Up 5 seconds 0.0.0.0:7054->7054/tcp ca_org1
33810360a9c3 hyperledger/fabric-ca:latest "sh -c 'fabric-ca-se…" 6 seconds ago Up 5 seconds 7054/tcp, 0.0.0.0:9054->9054/tcp ca_orderer
fdf33f12e860 hyperledger/fabric-ca:latest "sh -c 'fabric-ca-se…" 6 seconds ago Up 5 seconds 7054/tcp, 0.0.0.0:8054->8054/tcp ca_org2
...

Deploy the chaincode with the chaincode name and language options.

./network.sh deployCC -ccn basic -ccp ../asset-transfer-basic/chaincode-javascript/ -ccl javascript

The chaincode will be successfully deployed.

deploying chaincode on channel 'mychannel'
executing with the following
- CHANNEL_NAME: mychannel
- CC_NAME: basic
- CC_SRC_PATH: ../asset-transfer-basic/chaincode-javascript/
- CC_SRC_LANGUAGE: javascript
...
Committed chaincode definition for chaincode 'basic' on channel 'mychannel':
Version: 1.0, Sequence: 1, Endorsement Plugin: escc, Validation Plugin: vscc, Approvals: [Org1MSP: true, Org2MSP: true]
Query chaincode definition successful on peer0.org2 on channel 'mychannel'
Chaincode initialization is not required

Sample Application

Install application dependencies for sample programs developed using Fabric SDK for Node.js.

cd asset-transfer-basic/application-javascript
npm install

Dependencies defined in package.json will be installed, and the directory will contain the following files.

zhiqich@ubuntu:~/fabric-samples/asset-transfer-basic/application-javascript$ ls
app.js node_modules package.json package-lock.json

Run the application with the following command.

node app.js

The results look like the following.

Loaded the network configuration located at /home/zhiqich/fabric-samples/test-network/organizations/peerOrganizations/org1.example.com/connection-org1.json
Built a CA Client named ca-org1
Built a file system wallet at /home/zhiqich/fabric-samples/asset-transfer-basic/application-javascript/wallet
Successfully enrolled admin user and imported it into the wallet
Successfully registered and enrolled user appUser and imported it into the wallet

--> Submit Transaction: InitLedger, function creates the initial set of assets on the ledger
*** Result: committed

--> Evaluate Transaction: GetAllAssets, function returns all the current assets on the ledger
*** Result: [
{
"Key": "asset1",
"Record": {
"ID": "asset1",
"Color": "blue",
"Size": 5,
"Owner": "Tomoko",
"AppraisedValue": 300,
"docType": "asset"
}
},
{
"Key": "asset2",
"Record": {
"ID": "asset2",
"Color": "red",
"Size": 5,
"Owner": "Brad",
"AppraisedValue": 400,
"docType": "asset"
}
},
{
"Key": "asset3",
"Record": {
"ID": "asset3",
"Color": "green",
"Size": 10,
"Owner": "Jin Soo",
"AppraisedValue": 500,
"docType": "asset"
}
},
{
"Key": "asset4",
"Record": {
"ID": "asset4",
"Color": "yellow",
"Size": 10,
"Owner": "Max",
"AppraisedValue": 600,
"docType": "asset"
}
},
{
"Key": "asset5",
"Record": {
"ID": "asset5",
"Color": "black",
"Size": 15,
"Owner": "Adriana",
"AppraisedValue": 700,
"docType": "asset"
}
},
{
"Key": "asset6",
"Record": {
"ID": "asset6",
"Color": "white",
"Size": 15,
"Owner": "Michel",
"AppraisedValue": 800,
"docType": "asset"
}
}
]

--> Submit Transaction: CreateAsset, creates new asset with ID, color, owner, size, and appraisedValue arguments
*** Result: committed
*** Result: {
"ID": "asset13",
"Color": "yellow",
"Size": "5",
"Owner": "Tom",
"AppraisedValue": "1300"
}

--> Evaluate Transaction: ReadAsset, function returns an asset with a given assetID
*** Result: {
"ID": "asset13",
"Color": "yellow",
"Size": "5",
"Owner": "Tom",
"AppraisedValue": "1300"
}

--> Evaluate Transaction: AssetExists, function returns "true" if an asset with given assetID exist
*** Result: true

--> Submit Transaction: UpdateAsset asset1, change the appraisedValue to 350
*** Result: committed

--> Evaluate Transaction: ReadAsset, function returns "asset1" attributes
*** Result: {
"ID": "asset1",
"Color": "blue",
"Size": "5",
"Owner": "Tomoko",
"AppraisedValue": "350"
}

--> Submit Transaction: UpdateAsset asset70, asset70 does not exist and should return an error
2021-03-05T17:06:59.733Z - error: [Transaction]: Error: No valid responses from any peers. Errors:
peer=peer0.org2.example.com:9051, status=500, message=error in simulation: transaction returned with failure: Error: The asset asset70 does not exist
peer=peer0.org1.example.com:7051, status=500, message=error in simulation: transaction returned with failure: Error: The asset asset70 does not exist
*** Successfully caught the error:
Error: No valid responses from any peers. Errors:
peer=peer0.org2.example.com:9051, status=500, message=error in simulation: transaction returned with failure: Error: The asset asset70 does not exist
peer=peer0.org1.example.com:7051, status=500, message=error in simulation: transaction returned with failure: Error: The asset asset70 does not exist

--> Submit Transaction: TransferAsset asset1, transfer to new owner of Tom
*** Result: committed

--> Evaluate Transaction: ReadAsset, function returns "asset1" attributes
*** Result: {
"ID": "asset1",
"Color": "blue",
"Size": "5",
"Owner": "Tom",
"AppraisedValue": "350"
}

Enroll Admin User

Admin enrollment is bootstrapped when the Certificate Authority is started; in the application, it is executed right after the connection profile and wallet paths are specified.

// build an in memory object with the network configuration (also known as a connection profile)
const ccp = buildCCPOrg1();

// build an instance of the fabric ca services client based on
// the information in the network configuration
const caClient = buildCAClient(FabricCAServices, ccp, 'ca.org1.example.com');

// setup the wallet to hold the credentials of the application user
const wallet = await buildWallet(Wallets, walletPath);

// in a real application this would be done on an administrative flow, and only once
await enrollAdmin(caClient, wallet, mspOrg1);

After the admin is successfully enrolled, the log prints the following.

Built a CA Client named ca-org1
Built a file system wallet at /home/zhiqich/fabric-samples/asset-transfer-basic/application-javascript/wallet
Successfully enrolled admin user and imported it into the wallet

Register and Enroll Application User

Application users are registered and enrolled by the admin user, and they will be used to interact with the blockchain network.

// in a real application this would be done only when a new user was required to be added
// and would be part of an administrative flow
await registerAndEnrollUser(caClient, wallet, mspOrg1, org1UserId, 'org1.department1');

After the user is successfully enrolled, the log prints the following.

Successfully registered and enrolled user appUser and imported it into the wallet

Prepare Connection to Channel and Smart Contract

Use the channel name and chaincode name to get a reference to the Contract via the Gateway; the getContract() API is provided for this purpose.

// setup the gateway instance
// The user will now be able to create connections to the fabric network and be able to
// submit transactions and query. All transactions submitted by this gateway will be
// signed by this user using the credentials stored in the wallet.
await gateway.connect(ccp, {
    wallet,
    identity: org1UserId,
    discovery: { enabled: true, asLocalhost: true } // using asLocalhost as this gateway is using a fabric network deployed locally
});

// Build a network instance based on the channel where the smart contract is deployed
const network = await gateway.getNetwork(channelName);

// Get the contract from the network.
const contract = network.getContract(chaincodeName);

Initialize Ledger

The submitTransaction() function is used to invoke the chaincode InitLedger() function to populate the ledger with some sample data.

// Initialize a set of asset data on the channel using the chaincode 'InitLedger' function.
// This type of transaction would only be run once by an application the first time it was started after it
// deployed the first time. Any updates to the chaincode deployed later would likely not need to run
// an "init" type function.
console.log('\n--> Submit Transaction: InitLedger, function creates the initial set of assets on the ledger');
await contract.submitTransaction('InitLedger');
console.log('*** Result: committed');
...
async InitLedger(ctx) {
    const assets = [
        {
            ID: 'asset1',
            Color: 'blue',
            Size: 5,
            Owner: 'Tomoko',
            AppraisedValue: 300,
        },
        ...

After successful initialization, the log shows the following.

--> Submit Transaction: InitLedger, function creates the initial set of assets on the ledger
*** Result: committed

Invoke Chaincode Function

Each peer in a blockchain network hosts a copy of the ledger. An application program can view the most recent data from the ledger using read-only invocations of a smart contract (a query). Applications can query data (key-value pairs) for a single key or multiple keys, and JSON queries are also supported. The evaluateTransaction() function is used to call the GetAllAssets() function, which returns all assets found in the world state.

// Let's try a query type operation (function).
// This will be sent to just one peer and the results will be shown.
console.log('\n--> Evaluate Transaction: GetAllAssets, function returns all the current assets on the ledger');
let result = await contract.evaluateTransaction('GetAllAssets');
console.log(`*** Result: ${prettyJSONString(result.toString())}`);
...
// GetAllAssets returns all assets found in the world state.
async GetAllAssets(ctx) {
    const allResults = [];
    // range query with empty string for startKey and endKey does an open-ended query of all assets in the chaincode namespace.
    const iterator = await ctx.stub.getStateByRange('', '');
    let result = await iterator.next();
    while (!result.done) {
        const strValue = Buffer.from(result.value.value.toString()).toString('utf8');
        let record;
        try {
            record = JSON.parse(strValue);
        } catch (err) {
            console.log(err);
            record = strValue;
        }
        allResults.push({ Key: result.value.key, Record: record });
        result = await iterator.next();
    }
    return JSON.stringify(allResults);
}

The results should be the following.

--> Evaluate Transaction: GetAllAssets, function returns all the current assets on the ledger
*** Result: [
{
"Key": "asset1",
"Record": {
"ID": "asset1",
"Color": "blue",
"Size": 5,
"Owner": "Tomoko",
"AppraisedValue": 300,
"docType": "asset"
}
},
...

The submitTransaction() function can also be used to call the CreateAsset() function to add a new asset to the world state.

// CreateAsset issues a new asset to the world state with given details.
async CreateAsset(ctx, id, color, size, owner, appraisedValue) {
    const asset = {
        ID: id,
        Color: color,
        Size: size,
        Owner: owner,
        AppraisedValue: appraisedValue,
    };
    return ctx.stub.putState(id, Buffer.from(JSON.stringify(asset)));
}
...
--> Submit Transaction: CreateAsset, creates new asset with ID, color, owner, size, and appraisedValue arguments
*** Result: committed
*** Result: {
"ID": "asset13",
"Color": "yellow",
"Size": "5",
"Owner": "Tom",
"AppraisedValue": "1300"
}

The chaincode also provides the AssetExists() and ReadAsset() functions, called via evaluateTransaction(), and the UpdateAsset() and TransferAsset() functions, called via submitTransaction(), to check whether a specific asset exists, read its data, update it, and transfer it to a new owner.

// AssetExists returns true when asset with given ID exists in world state.
async AssetExists(ctx, id) {
    const assetJSON = await ctx.stub.getState(id);
    return assetJSON && assetJSON.length > 0;
}

// ReadAsset returns the asset stored in the world state with given id.
async ReadAsset(ctx, id) {
    const assetJSON = await ctx.stub.getState(id); // get the asset from chaincode state
    if (!assetJSON || assetJSON.length === 0) {
        throw new Error(`The asset ${id} does not exist`);
    }
    return assetJSON.toString();
}

// UpdateAsset updates an existing asset in the world state with provided parameters.
async UpdateAsset(ctx, id, color, size, owner, appraisedValue) {
    const exists = await this.AssetExists(ctx, id);
    if (!exists) {
        throw new Error(`The asset ${id} does not exist`);
    }
    // overwriting original asset with new asset
    const updatedAsset = {
        ID: id,
        Color: color,
        Size: size,
        Owner: owner,
        AppraisedValue: appraisedValue,
    };
    return ctx.stub.putState(id, Buffer.from(JSON.stringify(updatedAsset)));
}

// TransferAsset updates the owner field of asset with given id in the world state.
async TransferAsset(ctx, id, newOwner) {
    const assetString = await this.ReadAsset(ctx, id);
    const asset = JSON.parse(assetString);
    asset.Owner = newOwner;
    return ctx.stub.putState(id, Buffer.from(JSON.stringify(asset)));
}
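
On the application side, these functions are invoked with the same submitTransaction()/evaluateTransaction() pattern used earlier. A minimal sketch following app.js (the argument values mirror the sample run above):

// check existence, update, transfer, then read back asset1
let exists = await contract.evaluateTransaction('AssetExists', 'asset1');
console.log(`*** AssetExists: ${exists.toString()}`);

await contract.submitTransaction('UpdateAsset', 'asset1', 'blue', '5', 'Tomoko', '350');
await contract.submitTransaction('TransferAsset', 'asset1', 'Tom');

const asset = await contract.evaluateTransaction('ReadAsset', 'asset1');
console.log(`*** ReadAsset: ${asset.toString()}`);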

Issue

Due to the change of the IPHETH_BUF_SIZE parameter from 1516 to 1514 in iOS 14, the iPhone cannot provide a tethered connection to Ubuntu 20.04 LTS.

Solution

Patch the IPHETH_BUF_SIZE constant (defined in the driver source ipheth.c) directly in the compiled module ipheth.ko.

sudo -i
cd /lib/modules/$(uname -r)/kernel/drivers/net/usb/
cp -ia ipheth.ko ipheth.ko.orig
# swap the little-endian 16-bit constant: ec05 (0x05EC = 1516) -> ea05 (0x05EA = 1514)
xxd -p -c 20000 ipheth.ko.orig | sed 's/ec05/ea05/g' | xxd -r -p > ipheth.ko
strip --strip-debug ipheth.ko
rmmod ipheth; modprobe ipheth

Prerequisite

Install build-essential with sudo apt install build-essential if there is a Wi-Fi or Ethernet connection. If no internet access is available on the device, download the Ubuntu deb packages of build-essential on another machine and install them with sudo dpkg -i package_name.deb.

sudo dpkg -i libc6_2.31-0ubuntu9.1_amd64.deb
sudo dpkg -i manpages-dev_5.05-1_all.deb
sudo dpkg -i binutils-common_2.34-6ubuntu1_amd64.deb
sudo dpkg -i linux-libc-dev_5.4.0-48.52_amd64.deb
sudo dpkg -i libctf-nobfd0_2.34-6ubuntu1_amd64.deb
sudo dpkg -i libgomp1_10.2.0-5ubuntu1~20.04_amd64.deb
sudo dpkg -i libquadmath0_10.2.0-5ubuntu1~20.04_amd64.deb
sudo dpkg -i libmpc3_1.1.0-1_amd64.deb
sudo dpkg -i libatomic1_10.2.0-5ubuntu1~20.04_amd64.deb
sudo dpkg -i libubsan1_10.2.0-5ubuntu1~20.04_amd64.deb
sudo dpkg -i libcrypt-dev_4.4.10-10ubuntu4_amd64.deb
sudo dpkg -i libisl22_0.22.1-1_amd64.deb
sudo dpkg -i libbinutils_2.34-6ubuntu1_amd64.deb
sudo dpkg -i libc-dev-bin_2.31-0ubuntu9.1_amd64.deb
sudo dpkg -i libcc1-0_10.2.0-5ubuntu1~20.04_amd64.deb
sudo dpkg -i liblsan0_10.2.0-5ubuntu1~20.04_amd64.deb
sudo dpkg -i libitm1_10.2.0-5ubuntu1~20.04_amd64.deb
sudo dpkg -i gcc-9-base_9.3.0-10ubuntu2_amd64.deb
sudo dpkg -i libtsan0_10.2.0-5ubuntu1~20.04_amd64.deb
sudo dpkg -i libctf0_2.34-6ubuntu1_amd64.deb
sudo dpkg -i libasan5_9.3.0-10ubuntu2_amd64.deb
sudo dpkg -i cpp-9_9.3.0-10ubuntu2_amd64.deb
sudo dpkg -i libc6-dev_2.31-0ubuntu9.1_amd64.deb
sudo dpkg -i binutils-x86-64-linux-gnu_2.34-6ubuntu1_amd64.deb
sudo dpkg -i binutils_2.34-6ubuntu1_amd64.deb
sudo dpkg -i libgcc-9-dev_9.3.0-10ubuntu2_amd64.deb
sudo dpkg -i cpp_9.3.0-1ubuntu2_amd64.deb
sudo dpkg -i gcc-9_9.3.0-10ubuntu2_amd64.deb
sudo dpkg -i gcc_9.3.0-1ubuntu2_amd64.deb


Copyright: Zhihu 「宋泠雨」, CC 4.0 BY-SA
https://zhuanlan.zhihu.com/p/342499361
Copyright: CSDN 「恍恍惚惚斯基」, CC 4.0 BY-SA
https://blog.csdn.net/weixin_42432439/article/details/108777302

"Blossoms" by Jin Yucheng

A-Bao is ten,
his neighbor Beidi is six.
The two of them climb from the false third storey up onto the roof;
the tiles are warm, and half of Luwan District fills their eyes...
Beidi holds tight to A-Bao,
her small body pressing close,
hair flying.
When the southeast wind picks up,
they hear ships' horns on the Huangpu River,
the broad hum of a French horn,
soothing a young heart.

OpenDHT is a lightweight C++14 Distributed Hash Table implementation. It provides an easy-to-use distributed in-memory data store. Every node in the network can read and write values to the store. Values are distributed over the network, with redundancy.

Build

Tool

  • Operating system
    Linux ubuntu 5.8.0-43-generic #49~20.04.1-Ubuntu SMP Fri Feb 5 09:57:56 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
  • GCC
    gcc (Ubuntu 10.2.0-5ubuntu1~20.04) 10.2.0
    Copyright (C) 2020 Free Software Foundation, Inc.
    This is free software; see the source for copying conditions. There is NO
    warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
  • Clang
    clang version 11.0.0
    Target: x86_64-unknown-linux-gnu
    Thread model: posix
    InstalledDir: /usr/local/bin
  • CMake
    cmake version 3.19.4

    CMake suite maintained and supported by Kitware (kitware.com/cmake).

Dependency

  • OpenDHT dependencies
    sudo apt install libncurses5-dev libreadline-dev nettle-dev libgnutls28-dev libargon2-0-dev libmsgpack-dev  libssl-dev libfmt-dev libjsoncpp-dev libhttp-parser-dev libasio-dev
  • Python binding dependencies
    sudo apt-get install cython3 python3-dev python3-setuptools
  • Restinio
    mkdir restinio && cd restinio
    wget https://github.com/aberaud/restinio/archive/2c0b6f5e5ba04d7a74e8406a3df1fd433680599d.tar.gz
    ls -l && tar -xzf 2c0b6f5e5ba04d7a74e8406a3df1fd433680599d.tar.gz
    cd restinio-2c0b6f5e5ba04d7a74e8406a3df1fd433680599d/dev
    cmake -DCMAKE_INSTALL_PREFIX=/usr -DRESTINIO_TEST=OFF -DRESTINIO_SAMPLE=OFF -DRESTINIO_INSTALL_SAMPLES=OFF -DRESTINIO_BENCH=OFF -DRESTINIO_INSTALL_BENCHES=OFF -DRESTINIO_FIND_DEPS=ON -DRESTINIO_ALLOW_SOBJECTIZER=Off -DRESTINIO_USE_BOOST_ASIO=none .
    make -j2
    sudo make install
    cd ../../../ && rm -rf restinio

Build

  • Clone the repo
    git clone https://github.com/savoirfairelinux/opendht.git
  • Build and install
    cd opendht
    mkdir build && cd build
    cmake -DOPENDHT_PYTHON=ON -DCMAKE_INSTALL_PREFIX=/usr ..
    make -j2
    sudo make install

Example

Python3

  • Example code
    import opendht as dht

    node = dht.DhtRunner()
    node.run()

    # Join the network through any running node,
    # here using a known bootstrap node.
    node.bootstrap("bootstrap.jami.net", "4222")

    # blocking call (provide callback arguments to make the call non-blocking)
    node.put(dht.InfoHash.get("unique_key"), dht.Value(b'some binary data'))

    results = node.get(dht.InfoHash.get("unique_key"))
    for r in results:
        print(r)
  • Command line result
    Value[id:640b4d1af8908cc4 data:736f6d652062696e6172792064617461]
  • Decode
    >>> bytearray.fromhex("736f6d652062696e6172792064617461").decode()
    'some binary data'

Dhtnode

Dhtnode is a command-line tool that runs a DHT node and performs operations on the distributed in-memory data store.

Command Line Running

Dhtnode can be run directly from the command line. Several parameters can be set when creating a DHT node, as shown below.

dhtnode [-p local_port] [-b bootstrap_host:port] [-n netid] [-i] [-D] [-f] [-v [-l logfile|-L]] [-s|-d]
  • -b specifies a bootstrap node address (can be any running node of the DHT network)
  • -p specifies the local UDP port to bind (optional). If not set, any available port will be used. Note that the default OpenDHT port (ideal for public nodes) is 4222
  • -D enables the multicast automatic local peer discovery mechanism

Command Line Interface

OpenDHT command line interface (CLI)
Possible commands:
h, help Print this help message.
x, quit Quit the program.
log Start/stop printing DHT logs.

Node information:
ll Print basic information and stats about the current node.
ls [key] Print basic information about current search(es).
ld [key] Print basic information about currently stored values on this node (or key).
lr Print the full current routing table of this node.

Operations on the DHT:
b <ip:port> Ping potential node at given IP address/port.
g <key> Get values at <key>.
l <key> Listen for value changes at <key>.
cl <key> <token> Cancel listen for <token> and <key>.
p <key> <str> Put string value at <key>.
pp <key> <str> Put string value at <key> (persistent version).
cpp <key> <id> Cancel persistent put operation for <key> and value <id>.
s <key> <str> Put string value at <key>, signed with our generated private key.
e <key> <dest> <str> Put string value at <key>, encrypted for <dest> with its public key (if found).
cc Trigger connectivity changed signal.

Indexation operations on the DHT:
il <name> <key> [exact match] Lookup the index named <name> with the key <key>.
Set [exact match] to 'false' for inexact match lookup.
ii <name> <key> <value> Inserts the value <value> under the key <key> in the index named <name>.

Demo

  • Create node
    zhiqich@ubuntu:~/Documents/opendht$ dhtnode -D
    OpenDHT node db159e7c839b5872692b244deeb849eb8195233b running on port 39901
    (type 'h' or 'help' for a list of possible commands)
  • Check stats
    >> ll
    >> OpenDHT node db159e7c839b5872692b244deeb849eb8195233b running on port 39901
    1 ongoing operations
    IPv4 stats:
    Known nodes: 0 good, 0 dubious, 0 incoming.
    0 searches, 0 total cached nodes

    IPv6 stats:
    Known nodes: 0 good, 0 dubious, 0 incoming.
    0 searches, 0 total cached nodes
  • Put value
    >> p test "this is a test string"
    Using h(test) = a94a8fe5ccb19ba61c4c0873d391e987982fbbd3
    >> Put: success, took 1.36 ms. Value ID: 8530bb7894ee21b1
  • Get value
    >> g test
    Using h(test) = a94a8fe5ccb19ba61c4c0873d391e987982fbbd3
    >> Get: found 1 value(s) after 178 us
    Value[id:8530bb7894ee21b1 data(text/plain):""this"]
    Get: completed, took 1.4 ms

Dhtchat

Dhtchat is a simple IM client working over the DHT.

Usage

zhiqich@ubuntu:~/Documents/opendht$ dhtchat -h
Usage: dhtchat [-n network_id] [-p local_port] [-b bootstrap_host[:port]]

dhtchat, a simple OpenDHT command line chat client.
Report bugs to: https://opendht.net
  • Flags are similar to those of Dhtnode
  • -D enables the multicast automatic local peer discovery mechanism

Command Line Interface

  • c {channel} to join a channel
  • d to disconnect the channel
  • e {target} {message} to send an encrypted message to a specific user with public key ID

Demo

  • Create chat node
    zhiqich@ubuntu:~/Documents/opendht$ dhtchat -D
    OpenDHT node 1f04c73f6c5af467cd68ba4058a065bc0731ac19 running on port 59537
    Public key ID ecd52bfcb037b225e9021d5149a84a17eb881f4b
    type 'c {hash}' to join a channel
  • Join channel
    > c test
    Joining h(test) = a94a8fe5ccb19ba61c4c0873d391e987982fbbd3
  • Chat
    >> hi
    >> d108ebdc3f238876e94e59d6214ab465d644b460 at 2021-02-10 23:09:59 (took 0.811252s) : 10835693456077075259 - hi