
We'll find a way; we always have.

After several months of hard work writing the application, how do we deploy it? Let's learn with a simple Hello World example.

The project structure is as follows:

.
├── go.mod
└── hello.go

The content of hello.go is as follows:

package main

func main() {
	println("hello world!")
}

In order to keep up with the trend, we choose to use Docker deployment here.

### First attempt.

For convenience, we are going to do everything, including compilation, inside Docker. After some research, we end up with the following Dockerfile:

FROM golang:alpine
WORKDIR /build
COPY hello.go .
RUN go build -o hello hello.go
CMD ["./hello"]

Next, let's start building.

$ docker build -t hello:v1 .
$ docker run -it --rm hello:v1 ls -l /build
total 1260
-rwxr-xr-x 1 root root 1281547 Mar 6 15:54 hello
-rw-r--r-- 1 root root 55 Mar 6 14:59 hello.go

# try to run it
$ docker run -it --rm hello:v1
hello world!

It runs successfully, and then we look at the size of the image.

$ docker images | grep hello
hello v1 2783ee221014 44 minutes ago 314MB

I was shocked: the image is actually 314MB, from nothing more than a docker build. What happened?

Although it runs, the size of this image is scary. We merely printed a line of hello world, yet the image is more than 300 MB. That is unreasonable and needs to be optimized.


### Second attempt.
After digging around, I found that the base image we used was too large.

$ docker images | grep golang
golang alpine d026981a7165 2 days ago 313MB

A friend told me I could compile the code first and then copy the binary in, so the huge base image isn't needed. Easier said than done: I still spent some time learning, and the Dockerfile finally looks like this:

FROM alpine
WORKDIR /build
COPY hello .
CMD ["./hello"]

Let’s rebuild the image:

$ docker build -t hello:v2 .
...
=> ERROR [3/3] COPY hello . 0.0s
------
> [3/3] COPY hello .:
------
failed to compute cache key: "/hello" not found: not found

Oops, an error. It complains that hello cannot be found; I forgot to compile hello.go first. Let's compile and run it again.

$ go build -o hello hello.go
$ docker run -it --rm hello:v2
standard_init_linux.go:228: exec user process caused: exec format error

Whoops, failed again.

Whoops, the format is wrong. It turns out our development machine is not Linux. Don't give up, let's do it again.

$ GOOS=linux go build -o hello hello.go
$ docker build -t hello:v2 .
# ...
Successfully

The build finally succeeds; let's try it out.

$ docker run -it --rm hello:v2
hello world!

No problem, let’s take a look at the content and size.

$ docker run -it --rm hello:v2 ls -l /build
total 1252
-rwxr-xr-x 1 root root 1281587 Mar 6 16:18 hello

$ docker images | grep hello
hello v2 0dd53f016c93 53 seconds ago 6.61MB
hello v1 ac0e37173b85 25 minutes ago 314MB

Wow, it’s only 6.61MB this time, which is OK!

### Third attempt.

Although the above image can be successfully built, there are still some shortcomings. It is not a multi-stage build.

Building a Docker image from Go code normally involves three steps:

  • Compile the Go code locally; if cgo is involved, cross-platform compilation gets more troublesome.
  • Build a Docker image with the compiled executable.
  • Write a shell script or Makefile to run these steps with one command.

Multi-stage builds put all of this into a single Dockerfile: no source code leaks, no scripts for cross-platform compilation, and a minimal final image.

Loving to learn and striving for perfection, I ended up writing the following Dockerfile.

FROM golang:alpine AS builder
WORKDIR /build
ADD go.mod .
COPY . .
RUN go build -o hello hello.go
FROM alpine
WORKDIR /build
COPY --from=builder /build/hello /build/hello
CMD ["./hello"]

The part starting with the first FROM builds a builder image, in which the executable hello is compiled.

The part starting with the second FROM is to copy the executable hello from the first image, and use the smallest possible base image alpine to ensure that the final image is as small as possible.

As for why we don't use the even smaller scratch: there is really nothing inside scratch, so there is no way to poke around in the container if something goes wrong, and alpine is only about 5MB, which has little impact on our service.

Let’s run it first to verify:

$ docker run -it --rm hello:v3
hello world!

No problem, as expected! See what the size looks like:

$ docker images | grep hello
hello v3 f51e1116be11 8 hours ago 6.61MB
hello v2 0dd53f016c93 8 hours ago 6.61MB
hello v1 ac0e37173b85 8 hours ago 314MB

Exactly the same size as the image built by the second method. Let's take a look at the contents of the image:

$ docker run -it --rm hello:v3 ls -l /build
total 1252
-rwxr-xr-x 1 root root 1281547 Mar 6 16:32 hello

Again, it contains only the single hello executable. A perfect build!

https://blog.devgenius.io/tutorial-building-a-golang-application-docker-image-78e36d437c70

Please stay close to me even when I grow old.

Introducing the library

The library I found this time is Robotgo.
Robotgo is a desktop automation library for Go: it controls the mouse, keyboard, bitmaps, and images, and can read the screen, processes, window handles, and global event listeners.
It appears to support Windows, Mac, and Linux, on both 64-bit and 32-bit.

Environment setup

This time I tried running it on Windows and Linux, so I'd like to summarize the setup steps for each. Specifically, I used:
Windows: Windows 10
Linux: Ubuntu 20.04

First, the preparation needed before installing Robotgo.

Let's start with the Windows setup. For Windows, all you need to do is install MinGW-w64; if it is already installed, nothing else is required.
The installation steps are summarized at the link below, so please refer to it. It's simple: just run the installer and add it to your PATH.
https://www.javadrive.jp/cstart/install/index6.html

Also, you might need the following as well (I did):

go get github.com/lxn/win

Next, Linux. Since I used Ubuntu this time, here are the Ubuntu steps.
Just apt install the libraries below. That's it.

sudo apt install gcc libc6-dev
sudo apt install libx11-dev xorg-dev libxtst-dev libpng++-dev
sudo apt install xcb libxcb-xkb-dev x11-xkb-utils libx11-xcb-dev libxkbcommon-x11-dev
sudo apt install libxkbcommon-dev
sudo apt install xsel xclip

After that, on both Windows and Linux, simply run:

go get github.com/go-vgo/robotgo

to install it.

Automating mouse operations

Now let's write some mouse-automation code. For this article I'll simply post the sample introduced in the official repository.
Official: https://github.com/go-vgo/robotgo

package main

import (
	"github.com/go-vgo/robotgo"
)

func main() {
	robotgo.ScrollMouse(10, "up")
	robotgo.Scroll(100, 200)

	robotgo.MoveMouse(10, 10)
	robotgo.Drag(10, 10)

	robotgo.MouseClick("left", true)
	robotgo.MoveMouseSmooth(100, 200, 1.0, 100.0)
}
robotgo.ScrollMouse(10, "up")
robotgo.Scroll(100, 200)

These scroll the mouse,

robotgo.MoveMouse(10, 10)

this moves the mouse, and

robotgo.MouseClick("left", true)

this performs a left mouse click.

Keyboard input

Next is keyboard input. This one is slightly modified from the official sample: it types the text

Hello WorldだんしゃりHi galaxy. こんにちは世界.

and then saves a text file.

package main

import (
	"fmt"
	"time"

	"github.com/go-vgo/robotgo"
)

func main() {
	robotgo.TypeStr("Hello World")
	robotgo.TypeStr("だんしゃり", 1.0)

	robotgo.TypeStr("Hi galaxy. こんにちは世界.")
	robotgo.Sleep(1)

	robotgo.KeyTap("enter")
	robotgo.KeyTap("s", "ctrl")

	time.Sleep(1 * time.Second)

	robotgo.TypeStr("こんにちは世界.")

	robotgo.KeyTap("enter")

	robotgo.WriteAll("Test")
	text, err := robotgo.ReadAll()
	if err == nil {
		fmt.Println(text)
	}
}
robotgo.TypeStr("Hello World")
robotgo.TypeStr("だんしゃり", 1.0)

robotgo.TypeStr("Hi galaxy. こんにちは世界.")

These type text via the keyboard, and

robotgo.KeyTap("enter")
robotgo.KeyTap("s", "ctrl")

these tap specific keys.

In PyAutoGUI terms, the pyautogui.write function corresponds to robotgo.TypeStr, and pyautogui.press corresponds to robotgo.KeyTap.

robotgo.WriteAll("Test")
text, err := robotgo.ReadAll()

These appear to write a string to and read it back from internal memory (the clipboard).

■ Conclusion

This time I covered setting up Robotgo, a library for automation in Go, and some simple keyboard and mouse operations.
It's getting a bit long, so I'll stop here for now.

I haven't written about image recognition or event handling yet, so I plan to cover those next time.

https://elsammit-beginnerblg.hatenablog.com/entry/2021/10/03/095838

Love means never having to say you're sorry.

With its easy-to-use goroutines, Go gives you unbridled power to harness concurrency in your programs. However, Go is not spared from race conditions. We still have to use mutex and atomic constructs to ensure that shared variables and their state stay correct when read and written by goroutines. The aim of this article is to see what could go wrong if you are not careful and how to avoid race conditions in your code.

Let’s create a simple program to demonstrate a shared variable.

package mutexAtomicExample
import (
"fmt"
"sync"
"sync/atomic"
)
func Increment() {
var count int
var wg sync.WaitGroup //needed so that the function don't
//exit prematurely relative
//to all go routines
for i := 0; i < 100000; i++ {
wg.Add(1)
go func() {
defer wg.Done()
count++
}()
}
wg.Wait()
fmt.Printf("count: %v for 100000 cycles\n", count)
}

In the Increment function above, we create 100,000 goroutines, each of which increments the count variable. We use a WaitGroup to ensure the main program stays blocked [at the wg.Wait() line] until all the goroutines have had a chance to run to completion.

Surprisingly, the output is as follows if we run the increment function.

count: 98152 for 100000 cycles

What gives?? It turns out that despite the WaitGroup, the increments performed by the goroutines may not all take effect. The increment operation is not atomic, meaning it can be interrupted midway by another goroutine working concurrently to increment the same count variable. Hence, count ends up at less than 100,000. At a high level, this is a race condition.
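To see why, it helps to spell out that count++ is not a single step but a read-modify-write sequence. The sketch below (mine, not from the article) shows the three steps that two goroutines can interleave:

package main

import "fmt"

func main() {
	count := 0

	// count++ is effectively three separate steps:
	tmp := count  // 1. load the current value
	tmp = tmp + 1 // 2. add one to the local copy
	count = tmp   // 3. store it back

	// If another goroutine performs its own load between our load and our
	// store, both goroutines write back the same value and one increment
	// is lost. That lost update is the race.
	fmt.Println(count) // 1
}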

Two ways to handle — Mutex and Atomic Construct

In the first way, we use mutex to lock the shared variable such that only one go routine can increment the count at a time.

func IncrementMutex() {
var count int
var wg sync.WaitGroup
m := sync.Mutex{} //1
for i := 0; i < 100000; i++ {
wg.Add(1)
go func() {
defer wg.Done()
m.Lock()
defer m.Unlock()
count++
}()
}
wg.Wait()
fmt.Printf("count: %v for 100000 cycles with Mutex\n", count)
}

At //1, we create a mutex variable m with the sync.Mutex{} struct literal, which gives a zero-value mutex. By default this is an unlocked mutex.

In the goroutine, we lock the mutex before incrementing the count and unlock afterwards. Below is the output of running the function.

count: 100000 for 100000 cycles with Mutex
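As an aside (a sketch of mine, not from the article), a common way to package this pattern is to keep the mutex next to the data it guards in a small type, so callers cannot touch the count without locking:

package main

import (
	"fmt"
	"sync"
)

// SafeCounter bundles the mutex with the value it protects.
type SafeCounter struct {
	mu    sync.Mutex
	count int
}

func (c *SafeCounter) Inc() {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.count++
}

func (c *SafeCounter) Value() int {
	c.mu.Lock()
	defer c.mu.Unlock()
	return c.count
}

func main() {
	var c SafeCounter
	var wg sync.WaitGroup
	for i := 0; i < 1000; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			c.Inc()
		}()
	}
	wg.Wait()
	fmt.Println(c.Value()) // 1000
}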

Now, let's try the atomic construct, available as part of the sync/atomic package. Again we create a variant of the increment function:

func IncrementAtomic() {
var count int64 // sync atomic cannot work with int (1)
var wg sync.WaitGroup
for i := 0; i < 100000; i++ {
wg.Add(1)
go func() {
defer wg.Done()
atomic.AddInt64(&count, 1) //(2)
}()
}
wg.Wait()
fmt.Printf("count: %v for 100000 cycles with Atomic\n", count)
}

Notice in (1) we had to use an int64 instead of int, since the atomic package cannot work with a plain int. We could also use int32. Below are some functions showing the specific integer types that will work, so be sure to check the official documentation before using the sync/atomic package.

  • func AddInt32(addr *int32, delta int32) (new int32)
  • func AddInt64(addr *int64, delta int64) (new int64)
  • func AddUint32(addr *uint32, delta uint32) (new uint32)
  • func AddUint64(addr *uint64, delta uint64) (new uint64)
  • func AddUintptr(addr *uintptr, delta uintptr) (new uintptr)

At //(2), we invoke the AddInt64 function, which takes a pointer to the int64 variable as the first parameter and the delta value (also an int64) that we want to add to it.
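As a side note (not from the article), newer Go versions (1.19+) also provide typed atomics such as atomic.Int64, which avoid the raw pointer-and-function style. A sketch of the same counter with it:

package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

func main() {
	var count atomic.Int64 // the zero value is ready to use
	var wg sync.WaitGroup
	for i := 0; i < 100000; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			count.Add(1) // atomic increment, no mutex needed
		}()
	}
	wg.Wait()
	fmt.Printf("count: %v for 100000 cycles with atomic.Int64\n", count.Load())
}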

https://medium.com/@naikofficial56/concurrency-with-golang-7d8e0c65ef85


Bond. James Bond.

A pointer is a special type that is used to reference a value. Understanding it better can help you write advanced code in Go.

Variables

Computer memory can be thought of as a sequence of boxes, placed one after another in a line. Each box is labeled with a unique number, which increments sequentially. The unique location number is called a memory address.

A variable is just a convenient, alphanumeric nickname for a piece of memory location assigned by the compiler. When you declare variables, you are given a memory location to use from the free memory available.

Pointers

A pointer value is the address of a variable. A pointer is thus the location at which a value is stored. With a pointer, we can read or update the value of a variable indirectly, without using or even knowing the variable’s name, if indeed it has a name.

In the following example,

  • The statement &x yields a pointer to an integer variable.
  • With y := &x, we say y points to x, or that y contains the address of x.
  • The expression *y yields the value of that integer variable, which is 9 here.
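A minimal sketch of such an example (assuming an integer x holding 9):

package main

import "fmt"

func main() {
	x := 9
	y := &x // y points to x: it holds the address of x

	fmt.Println(*y) // dereferencing y yields the value of x: 9

	*y = 10        // updating through the pointer changes x itself
	fmt.Println(x) // 10
}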

Why are pointers useful?

“Pointers are used for efficiency because everything in Go is passed by value so they let us pass an address where data is held instead of passing the data’s value, to avoid unintentionally changing data, and so we can access an actual value in another function and not just a copy of it when we want to mutate it.”

Pointers Example

A copy of the value is sent to a function as an argument in pass-by-value. Any changes in the function will only impact the function’s variable; it will not update the original value outside of the function scope.

package main

import "fmt"

type User struct {
Name string
Age int
}

func (u User) String() string {
return fmt.Sprintf("User[Name: %s, Age: %d]", u.Name, u.Age)
}

// value receiver
func (u User) SetAge(age int) {
u.Age = age
fmt.Println(u)
}

func main() {
u := User{
Name: "John",
Age: 25,
}
u.SetAge(30)
fmt.Println(u)
}

Result:

User[Name: John, Age: 30]
User[Name: John, Age: 25]

Pass by pointer

In Go, everything is passed-by-value. We use pointers when we want to pass by reference and set the original value.

package main

import "fmt"

type User struct {
Name string
Age int
}

func (u User) String() string {
return fmt.Sprintf("User[Name: %s, Age: %d]", u.Name, u.Age)
}

// pointer receiver
func (u *User) SetAge(age int) {
u.Age = age
fmt.Println(u)
}

func main() {
u := User{
Name: "John",
Age: 25,
}
u.SetAge(30)
fmt.Println(u)
}

Result:

User[Name: John, Age: 30]
User[Name: John, Age: 30]

Go ahead , make my day.

In concurrent programming with Golang, the context package is a powerful tool to manage operations like timeouts, cancelation, deadlines, etc.

Among these operations, context with timeout is mainly used when we want to make an external request, such as a network request or a database request. I will show you how to use it to timeout a goroutine in this post.

Let’s first see a simple example.

package main

import (
"context"
"fmt"
"time"
)

func main() {
// Channel used to receive the result from doSomething function
ch := make(chan string, 1)

// Create a context with a timeout of 5 seconds
ctxTimeout, cancel := context.WithTimeout(context.Background(), time.Second*3)
defer cancel()

// Start the doSomething function
go doSomething(ctxTimeout, ch)

select {
case <-ctxTimeout.Done():
fmt.Printf("Context cancelled: %v\n", ctxTimeout.Err())
case result := <-ch:
fmt.Printf("Received: %s\n", result)
}
}

func doSomething(ctx context.Context, ch chan string) {
fmt.Println("doSomething Sleeping...")
time.Sleep(time.Second * 5)
fmt.Println("doSomething Wake up...")
ch <- "Did Something"
}

Okay, what are we doing here?

1. Timeout Context

Creating a timeout context is very easy. We use the function WithTimeout from the context package.

The following example defines a timeout context that will be canceled after 3 seconds.

ctxTimeout, cancel := context.WithTimeout(context.Background(), time.Second*3)
defer cancel()

Here, the WithTimeout takes a parent context and a duration parameter and returns a child context with a deadline set to the specified duration.

The parent context is returned by function Background. It is a non-nil, empty Context and is typically used by the main function as the top-level Context for incoming requests.

2. Long Waiting Function

We define a function that will execute in a separate goroutine. It will send the result to a predefined channel when finished.

func doSomething(ctx context.Context, ch chan string) {
fmt.Println("doSomething Sleeping...")
time.Sleep(time.Second * 5)
fmt.Println("doSomething Wake up...")
ch <- "Did Something"
}

The following is the predefined buffered channel.

ch := make(chan string, 1)

How to execute this function? It’s easy!

go doSomething(ctxTimeout, ch)

3. Waiting Orchestration

In the main function, we wait for the result either from the predefined channel or from the timeout context's Done channel.

The context will automatically signal to the ctxTimeout.Done channel if the timeout is reached. Otherwise, we will receive the result from the ch channel.

select {
case <-ctxTimeout.Done():
fmt.Printf("Context cancelled: %v\n", ctxTimeout.Err())
case result := <-ch:
fmt.Printf("Received: %s\n", result)
}
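One caveat: in this example the goroutine itself keeps sleeping even after the context times out. If the work should stop early too, one option (a sketch of mine, not from the article, meant as a drop-in replacement for doSomething above) is to select on ctx.Done() inside the worker as well:

// A variant of doSomething that gives up as soon as the context is cancelled.
func doSomethingCtxAware(ctx context.Context, ch chan string) {
	fmt.Println("doSomething Sleeping...")
	select {
	case <-time.After(5 * time.Second): // the simulated work
		fmt.Println("doSomething Wake up...")
		ch <- "Did Something"
	case <-ctx.Done(): // timeout or cancellation arrived first
		fmt.Println("doSomething giving up:", ctx.Err())
	}
}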

Use Cases

To better understand the context, let’s look at some real-world use cases.

Mongo

opts := options.Client()
client, _ := mongo.Connect(context.TODO(), opts)
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
defer cancel()

client.Database("db").Collection("collection").InsertOne(ctx, bson.M{"x": 1})

http.Get() timeout per request

ctx, cancel := context.WithTimeout(context.Background(), time.Microsecond*200)
defer cancel()

req, err := http.NewRequestWithContext(ctx, http.MethodGet, "https://google.com", nil)
if err != nil {
log.Fatalf("Error: %v", err)
return
}
resp, err := http.DefaultClient.Do(req)
if err != nil {
log.Fatalf("Error: %v", err)
return
}
fmt.Println(resp.StatusCode)

https://medium.com/geekculture/timeout-context-in-go-e88af0abd08d

We become the most familiar strangers.

One of the well-known advantages of Go is its support for concurrency. Thanks to goroutines and channels, writing high-performance concurrent code becomes much easier, and it is also fun to implement different concurrency patterns. I personally use this pattern a lot for crawlers and for downloading resources concurrently; hope it helps!

Let's start with a simple Go program:

// main() not waiting
func main() {
go task()
fmt.Println("main exiting...")
}

func task() {
time.Sleep(time.Second)
fmt.Println("task finished!")
}

There is a task() function that just sleeps for 1 second to simulate a time-consuming task. We want it to run concurrently, so we add the go keyword in front of the function call to start a goroutine.

go run main.go
main exiting...

As expected, the program exits immediately because the main function doesn't wait for the goroutine to finish.

To fix it, we can simply add a channel to block the main function:

// main() waiting through channel
func main() {
ch := make(chan struct{})
go task(ch)

<-ch // block until receive something
fmt.Println("main exiting...")
}

func task(ch chan<- struct{}) {
time.Sleep(time.Second)
fmt.Println("task finished!")
ch <- struct{}{}
}

We create an unbuffered channel of the empty struct{} type (since we only use the channel for signalling, the element type doesn't matter). After starting the goroutine, we immediately receive from the channel with <-ch, which blocks main() until something arrives. When task() finishes, it sends an empty struct value to ch; at that point main() finally receives from ch and continues to run.

go run main.go
task finished!
main exiting...

Reads and writes on an unbuffered channel are blocking operations, so it can be used to synchronize and communicate between goroutines, whereas a buffered channel doesn't block a sender unless the buffer is full.
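A tiny illustration of that difference (a sketch of mine, not part of the original example):

package main

import "fmt"

func main() {
	buf := make(chan int, 1) // buffered: one send succeeds with no receiver ready
	buf <- 1
	fmt.Println(<-buf)

	unbuf := make(chan int) // unbuffered: a send blocks until someone receives
	go func() { unbuf <- 2 }()
	fmt.Println(<-unbuf)
}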

Besides using a channel, we can also use a WaitGroup to make the main function wait; this is handier when there are multiple goroutines:

// main() waiting through waitGroup
func main() {
var wg sync.WaitGroup

for i := 0; i < 3; i++ {
wg.Add(1)
go func(i int) {
task(i)
wg.Done()
}(i)
}

fmt.Println("waiting...")
wg.Wait() // block until the WaitGroup counter becomes zero
fmt.Println("main exiting...")
}

func task(id int) {
time.Sleep(time.Second)
fmt.Println("task", id, "finished!")
}

What we need to do is quite simple: declare a sync.WaitGroup variable. When starting a concurrent job, call wg.Add(1) to increment the counter; when the job is done, call wg.Done() to decrement it. At the end of main() we call wg.Wait(), which blocks until the counter becomes zero.

go run main.go
waiting...
task 0 finished!
task 1 finished!
task 2 finished!
main exiting...

Things become more interesting when we implement a worker pool pattern:

// simple worker pool
func main() {
var wg sync.WaitGroup
pool := make(chan int, 5)

// create a worker keeps fetching the task and work concurrently
go func() {
for id := range pool {
task(id)
wg.Done()
}
}()

// add 5 tasks to the pool
for i := 1; i <= 5; i++ {
wg.Add(1)
pool <- i
fmt.Println("task", i, "added!")
}

close(pool)

fmt.Println("waiting...")
wg.Wait()
fmt.Println("main exiting...")
}

func task(id int) {
time.Sleep(time.Second)
fmt.Println("task", id, "finished!")
}

First we declare a buffered int channel pool, then we create a goroutine that keeps fetching data from the pool and executing the task: this is the worker. If we want multiple workers, we can simply copy the goroutine code multiple times or wrap it with a loop, as in the sketch below. The worker is ready and blocking because nothing is in the pool yet, so now we need to feed some jobs into it. We can do that with a simple for loop, sending the loop index as a task id to the pool. As soon as the worker can get something from the pool, it starts working. Finally, don't forget to close() the channel once the sender (main) finishes its work; otherwise the receiver (worker) will block forever waiting for new data on the channel and produce a deadlock.
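For the "wrap it with a loop" option, here is a sketch (mine, with an assumed numWorkers value) that would replace the single worker goroutine in the program above:

// Several identical workers draining the same pool; each one calls wg.Done()
// for every task it completes, exactly like the single-worker version.
numWorkers := 3 // assumed value for illustration
for w := 0; w < numWorkers; w++ {
	go func() {
		for id := range pool {
			task(id)
			wg.Done()
		}
	}()
}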

go run main.go
task 1 added!
task 2 added!
task 3 added!
task 4 added!
task 5 added!
waiting...
task 1 finished!
task 2 finished!
task 3 finished!
task 4 finished!
task 5 finished!
main exiting...

In a real-world situation, it is more likely that we don't know how many jobs we need to do, or we just want to keep feeding jobs until we stop it.

// panic: send on closed channel
func main() {
// create a channel to capture SIGTERM, SIGINT signal
quit := make(chan os.Signal, 1)
signal.Notify(quit, syscall.SIGTERM, syscall.SIGINT)

var wg sync.WaitGroup
pool := make(chan int, 10)
id := 1

// create a worker keeps fetching the task and work concurrently
go func() {
for id := range pool {
task(id)
wg.Done()
}
}()

// adding task to the pool infinitely
go func() {
for {
wg.Add(1)
pool <- id
fmt.Println("task", id, "added!")
id += 1
time.Sleep(time.Millisecond * 500)
}
}()

<-quit // block until receive SIGTERM, SIGINT
close(pool)
wg.Wait()
fmt.Println("main exiting...")
}

func task(id int) {
time.Sleep(time.Second)
fmt.Println("task", id, "finished!")
}

To achieve that, we can remove the loop condition to make it an infinite loop and wrap it inside a goroutine so it doesn't block. Then we also need a channel of type os.Signal to block main(). The program captures SIGTERM and SIGINT and sends them to the channel via signal.Notify().

go run main.go
task 1 added!
task 2 added!
task 1 finished!
task 3 added!
task 4 added!
^Ctask 2 finished!
panic: send on closed channel
goroutine 34 [running]:
main.main.func2()
/Users/yk/Project/test/main.go:77 +0x59
created by main.main
/Users/yk/Project/test/main.go:74 +0x185
exit status 2

What!? A panic... It's because we close the pool channel after receiving the quit signal, but the producer goroutine is still trying to send jobs to it, so the panic happens. We need a way to stop the producer goroutine as well.

Worker pool graceful shutdown with WaitGroup and Context:

// worker pool graceful shutdown with waitGroup and context
func main() {
// create a channel to capture SIGTERM, SIGINT signal
quit := make(chan os.Signal, 1)
signal.Notify(quit, syscall.SIGTERM, syscall.SIGINT)

var wg sync.WaitGroup
pool := make(chan int, 10)
id := 1

// create a worker keeps fetching the task and work concurrently
go func() {
for id := range pool {
task(id)
wg.Done()
}
}()

// create a context which listening to SIGTERM, SIGINT
ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGTERM, syscall.SIGINT)
defer stop()

// adding task to the pool infinitely, break until ctx.Done is closed
go func() {
for {
select {
case <-ctx.Done():
fmt.Println("stop filling the pool!")
close(pool)
return
default:
wg.Add(1)
pool <- id
fmt.Println("task", id, "added!")
id += 1
time.Sleep(time.Millisecond * 500)
}
}
}()

<-quit
wg.Wait()
fmt.Println("main exiting...")
}

func task(id int) {
time.Sleep(time.Second)
fmt.Println("task", id, "finished!")
}

Based on the previous version, we create a context ctx using the signal.NotifyContext() function; it closes the context's Done channel when the corresponding SIGTERM or SIGINT arrives. In the producer goroutine, instead of a plain for loop, we add a select statement. If we receive SIGTERM or SIGINT, the context's Done channel is closed, we enter the case <-ctx.Done(): branch, close the pool, and exit the goroutine. Otherwise, the default case runs and keeps feeding jobs to the pool.

go run main.go
task 1 added!
task 2 added!
task 3 added!
task 1 finished!
^Cstop filling the pool!
task 2 finished!
task 3 finished!
main exiting...

Now when we send SIGTERM or SIGINT to the program, it first stops feeding more jobs to the pool and exits the producer goroutine, then waits for the worker goroutine to finish all the existing tasks, and finally exits the main program.

https://medium.com/@yu-yk/graceful-shutdown-concurrent-go-program-with-waitgroup-and-context-33166210e170

I wish I could be more like you.

To shut down a Go application gracefully, you can use open source libraries or write your own code.

The following are popular libraries for stopping a Go application gracefully:

https://github.com/tylerb/graceful
https://github.com/braintree/manners

In this article, I will explain how to write your own code to stop a Go app gracefully.

Step 1

Make a channel that can listen for signals from the OS. The os.Signal type represents incoming OS signals; refer to the os and os/signal packages for more detail.

var gracefulStop = make(chan os.Signal, 1) // buffered so signal.Notify never drops a signal

Step 2

Use the Notify function of the os/signal package to register the signals we care about. For a graceful stop, we should listen for SIGTERM and SIGINT. signal.Notify takes a channel followed by the signal constants from syscall.

signal.Notify(gracefulStop, syscall.SIGTERM)
signal.Notify(gracefulStop, syscall.SIGINT)

Step 3

Now we need to create a goroutine that listens on the gracefulStop channel for incoming signals. The following goroutine blocks until it receives a signal from the OS. At that point you can clean up your stuff: closing DB connections, draining buffered channels, writing something to a file, etc. In the following code, I just wait for 2 seconds. After completing your work, you signal the OS by calling os.Exit. os.Exit takes an integer argument, normally 0 or 1: 0 means a clean exit without any error or problem, 1 means exit with an error or some issue. The exit status helps the caller identify the final status when the process ends.

go func() {
sig := <-gracefulStop
fmt.Printf("caught sig: %+v", sig)
fmt.Println("Wait for 2 second to finish processing")
time.Sleep(2*time.Second)
os.Exit(0)
}()

Full Source

For the demo, I use a simple HTTP server that displays a "Server is running" message in the browser.

package main
import (
"os"
"os/signal"
"syscall"
"fmt"
"time"
"net/http"
)
func main() {
http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
fmt.Fprint(w,"Server is running")
})
var gracefulStop = make(chan os.Signal, 1) // buffered so signal.Notify never drops a signal
signal.Notify(gracefulStop, syscall.SIGTERM)
signal.Notify(gracefulStop, syscall.SIGINT)
go func() {
sig := <-gracefulStop
fmt.Printf("caught sig: %+v", sig)
fmt.Println("Wait for 2 second to finish processing")
time.Sleep(2*time.Second)
os.Exit(0)
}()
http.ListenAndServe(":8080",nil)
}
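Go's http.Server also offers a Shutdown method (see the links below), which stops accepting new connections and waits for in-flight requests instead of sleeping for a fixed time. Here is a sketch of the same demo adapted around it (my adaptation, not the author's code):

package main

import (
	"context"
	"fmt"
	"net/http"
	"os"
	"os/signal"
	"syscall"
	"time"
)

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprint(w, "Server is running")
	})

	srv := &http.Server{Addr: ":8080"}

	gracefulStop := make(chan os.Signal, 1)
	signal.Notify(gracefulStop, syscall.SIGTERM, syscall.SIGINT)

	go func() {
		sig := <-gracefulStop
		fmt.Printf("caught sig: %+v\n", sig)
		// Give in-flight requests up to 2 seconds to finish.
		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
		defer cancel()
		if err := srv.Shutdown(ctx); err != nil {
			fmt.Println("shutdown error:", err)
		}
	}()

	if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
		fmt.Println("server error:", err)
	}
	fmt.Println("server stopped")
}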

https://kpbird.medium.com/golang-gracefully-stop-application-23c2390bb212
https://pkg.go.dev/syscall#SIGINT
http://husobee.github.io/golang/ecs/2016/05/19/ecs-graceful-go-shutdown.html
https://pkg.go.dev/net/http#Server.Shutdown

If you have no critics, you will likely have no success.

Accepting and processing signals from the operating system is important for various use cases in applications.

While many server-side languages have complicated or tedious approaches to processing signals from the OS, with Golang applications it’s extremely intuitive. Golang’s in-built OS package provides an easy way to integrate and react to Unix signals from your Go application. Let’s see how.

The Premise

Let's say we want to build a Golang application that, when asked to shut down, prints a message saying, "Thank you for using Golang." Let's set up the main function so it basically keeps doing some work until an exit command is given to the application.

func main() {
for {
fmt.Println("Doing Work")
time.Sleep(1 * time.Second)
}
}

When you run this application and kill it by providing a kill signal from your OS (Ctrl + C or Ctrl + Z, in most cases), you may see an output similar to this one:

Doing Work
Doing Work
Doing Work
Process finished with exit code 2

Now, we would like to interpret this kill signal within the Golang application and process it to print out the required exit message.

Receiving Signals

We will create a channel to receive the command from the OS. The OS package provides the Signal interface to handle signals and has OS-specific implementations.

killSignal := make(chan os.Signal, 1)

To notify killSignal, we use the Notify function provided by the signal package. The first parameter is a channel of os.Signal, while the remaining parameters are the OS signals we want our channel to be notified of.

signal.Notify(killSignal, os.Interrupt)

Alternatively, we can notify our signal with specific commands using the syscall package.

signal.Notify(killSignal, syscall.SIGINT, syscall.SIGTERM)

In order to process the signal, we’ll make our main function block wait for the interrupt signal using the killSignal channel. On receiving a command from the OS, we’ll print the exit message and kill the application.

In order to process our work loop, let’s move that into a separate goroutine using an anonymous function.

go func() {
for {
fmt.Println("Doing Work")
time.Sleep(1 * time.Second)
}
}()

While the work function runs in a separate routine, the main function will wait for the killSignal and print the exit message before exiting.

<-killSignal
fmt.Println("Thanks for using Golang!")

The Code

With all the components put together, the final code is this:

package main

import (
"fmt"
"os"
"os/signal"
"time"
)

func main() {

killSignal := make(chan os.Signal, 1)
signal.Notify(killSignal, os.Interrupt)
go func() {
for {
fmt.Println("Doing Work")
time.Sleep(1 * time.Second)
}
}()
<-killSignal
fmt.Println("Thanks for using Golang!")
}

On running this, it keeps executing the work loop, and upon receiving an interrupt signal from the OS, it prints the required message and then exits.

Doing Work
Doing Work
Doing Work
Thanks for using Golang!

Conclusion

This simple example can be extrapolated to handle many real-life scenarios, such as gracefully shutting down servers and receiving commands in command-line applications.

https://betterprogramming.pub/using-signals-to-handle-unix-commands-in-golang-f09e9efb7769

Strongly condemn, firmly sanction.

Coming from a PHP background, I instantly fell in love with Go after checking out the syntax and building small projects with it. What stood out most to me was the simplistic approach to lower level operations in Go, ranging from references and pointers to concurrency.

In this article, I will share my experience with concurrency with the aid of a small tool. The program fetches issues from the xkcd comics website and downloads each URL to build an offline JSON index. At the time of writing, there are over 2500 comics (URLs) to download.

Why concurrency?

Much has been written on the concurrency feature of Go so I’ll just share my experience on what I know it does for this project. As stated earlier, the xkcd website has over 2500 comics to download. To do this sequentially (that is, one at a time), it would take a long time (probably hours). If you happen to be very patient, there is still a very high chance the operation would fail due to factors such as the rate limiting feature on the website. It would not make any sense to download this resource sequentially (trust me, I tried).

By using a concurrent model, I was able to implement a Worker pool (to be explained later) to handle multiple HTTP requests at a time, keeping the connection alive and getting multiple results in a very short time.

What is this concurrent model? In Go, it is simply creating multiple goroutines to handle parts of the processes. A goroutine is Go’s way of achieving concurrency. They are functions that run concurrently with other functions. A goroutine can be compared to a lightweight thread (although it’s not a thread, as many goroutines can work on a single thread) which makes it lighter, faster and reliable. You can create as many as one million goroutines in one program. When two or more goroutines are running, they need a way to communicate with each other. That’s where channels come in.

To build this program, we will depend heavily on goroutines and channels, and to maintain the focus of this article, I will leave links below to explain these fundamental concepts better.

Planning and Design

The xkcd website features a JSON interface to allow external services to use its API. We will be downloading the data from this interface to build our offline index.

Based on the JSON this interface returns, we can design our struct. It will serve as a model for the data we want to extract during JSON handling:

package main

type Result struct {
Month string `json:"month"`
Num int `json:"num"`
Link string `json:"link"`
Year string `json:"year"`
News string `json:"news"`
SafeTitle string `json:"safe_title"`
Transcript string `json:"transcript"`
Alt string `json:"alt"`
Img string `json:"img"`
Title string `json:"title"`
Day string `json:"day"`
}

Fetching the comic

Now, before we jump into concurrency, we want to establish a function that serves the core purpose of the application — fetching the comic. The function has to be independent of our architecture and give room for re-usability across the program. I’ll explain each step below:

func fetch(n int) (*Result, error) {

client := &http.Client{
Timeout: 5 * time.Minute,
}

// concatenate strings to get url; ex: https://xkcd.com/571/info.0.json
url := strings.Join([]string{Url, fmt.Sprintf("%d", n), "info.0.json"}, "/")

req, err := http.NewRequest("GET", url, nil)

if err != nil {
return nil, fmt.Errorf("http request: %v", err)
}

resp, err := client.Do(req)
if err != nil {
return nil, fmt.Errorf("http err: %v", err)
}

var data Result

// error from web service, empty struct to avoid disruption of process
if resp.StatusCode != http.StatusOK {
data = Result{
}
} else {
if err := json.NewDecoder(resp.Body).Decode(&data); err != nil {
return nil, fmt.Errorf("json err: %v", err)
}
}

resp.Body.Close()

return &data, nil
}

First we create a custom HTTP client and set its timeout to 5 minutes. After joining the strings using the strings package, we create a new request and send it using the previously created client. If the request is successful, we decode the data from JSON into our local struct. Then we close the response body and return a pointer to the struct.

Confirm it works

So far we have implemented the core structure of the application. Let’s run this part to ensure our code works as expected. Here’s the complete code so far:

package main

import (
"encoding/json"
"fmt"
"log"
"net/http"
"strings"
"time"
)

type Result struct {
Month string `json:"month"`
Num int `json:"num"`
Link string `json:"link"`
Year string `json:"year"`
News string `json:"news"`
SafeTitle string `json:"safe_title"`
Transcript string `json:"transcript"`
Alt string `json:"alt"`
Img string `json:"img"`
Title string `json:"title"`
Day string `json:"day"`
}

const Url = "https://xkcd.com"


func fetch(n int) (*Result, error) {

client := &http.Client{
Timeout: 5 * time.Minute,
}

// concatenate strings to get url; ex: https://xkcd.com/571/info.0.json
url := strings.Join([]string{Url, fmt.Sprintf("%d", n), "info.0.json"}, "/")

req, err := http.NewRequest("GET", url, nil)

if err != nil {
return nil, fmt.Errorf("http request: %v", err)
}

resp, err := client.Do(req)
if err != nil {
return nil, fmt.Errorf("http err: %v", err)
}

var data Result

// error from web service, empty struct to avoid disruption of process
if resp.StatusCode != http.StatusOK {
data = Result{
}
} else {
if err := json.NewDecoder(resp.Body).Decode(&data); err != nil {
return nil, fmt.Errorf("json err: %v", err)
}
}

resp.Body.Close()

return &data, nil
}

func main() {
n := 200
result, err := fetch(n)
if err != nil {
log.Fatal(err)
}
fmt.Printf("%v\n", result.Title)
}

Expected output for the title is “Bill Nye”, which is the title for issue #200. You can change the issue number to verify further.

Channels Setup

As stated earlier, we will be creating a Worker pool to concurrently handle the operations. To do that, we have to set up buffered channels. A buffered channel is simply a channel with a specified capacity. With a buffered channel, send operations are blocked when the buffer is full and receive operations are blocked when the buffer is empty. We need this feature because in a Worker Pool, we assign multiple jobs to a number of workers and we want to ensure they are handled in an organized way. An example:

ch := make(chan int, 6)

If we have 6 workers in our worker pool, this buffered channel will ensure at every point in time, at most 6 jobs are given to the 6 workers.

var jobs = make(chan Job, 100)
var results = make(chan Result, 100)
var resultCollection []Result

func allocateJobs(noOfJobs int) {
for i := 0; i <= noOfJobs; i++ {
jobs <- Job{i+1}
}
close(jobs)
}

After creating the buffered channels and setting up the final results variable, we create a function to allocate jobs to the jobs channel. As expected, this function blocks once 100 unconsumed jobs are sitting in the channel, which means no new job can be added until a worker has received one. After all available jobs have been allocated, the jobs channel is closed to avoid further writes.

Create the Worker pool

A worker pool maintains multiple threads (or in our case, goroutines) and waits for tasks (jobs) to be assigned to them. For example, let's say we have 1000 jobs. We create a worker pool that spawns 100 workers. If the jobs channel is buffered at a capacity of 100, the workers take in the first 100 jobs, and as jobs finish processing, new jobs are allocated and flow to the workers, and so on.

Our worker pool will make use of Go’s WaitGroup, a synchronization primitive (type) that tells the main goroutine to wait for a collection of goroutines to finish.

Here’s a simple implementation for this project:

func worker(wg *sync.WaitGroup) {
	for job := range jobs {
		result, err := fetch(job.number)
		if err != nil {
			log.Printf("error in fetching: %v\n", err)
			continue // skip sending a nil result when fetch fails
		}
		results <- *result
	}
	wg.Done()
}

func createWorkerPool(noOfWorkers int) {
	var wg sync.WaitGroup
	for i := 0; i <= noOfWorkers; i++ {
		wg.Add(1)
		go worker(&wg)
	}
	wg.Wait()
	close(results)
}

In the code, we first define a worker function. The worker gets a job from the jobs channel, processes it, and passes the result to the results channel. In the createWorkerPool function, we use the WaitGroup primitive to set up the worker pool. The wg.Add(1) call increments the WaitGroup counter, and wg.Wait() blocks until that counter reaches zero. The wg.Done() call in the worker function decrements the counter; once all workers are done, control returns to the main goroutine and the results channel is closed to prevent further writes.

Get the results

The results are added to the results channel we created. However, it is buffered and can only hold 100 at a time, so we need a separate goroutine to retrieve results and make room for new ones. Here's how we do that:

func getResults(done chan bool) {
for result := range results {
if result.Num != 0 {
fmt.Printf("Retrieving issue #%d\n", result.Num)
resultCollection = append(resultCollection, result)
}
}
done <- true
}

If the result from the results channel is valid, we append it to the results collection. We have a boolean channel named “done”; we will use it to check if all the results have been collated.

Putting it all together

We have a bunch of functions, variables and types declarations, but how do we put them together? Which function is executed first and why? In this last section, we will see how it all comes together.

Here’s the code for the main function:

func main() {
// allocate jobs
noOfJobs := 3000
go allocateJobs(noOfJobs)

// get results
done := make(chan bool)
go getResults(done)

// create worker pool
noOfWorkers := 100
createWorkerPool(noOfWorkers)

// wait for all results to be collected
<-done

// convert result collection to JSON
data, err := json.MarshalIndent(resultCollection, "", " ")
if err != nil {
log.Fatal("json err: ", err)
}

// write json data to file
err = writeToFile(data)
if err != nil {
log.Fatal(err)
}
}

func writeToFile(data []byte) error {
f, err := os.Create("xkcd.json")
if err != nil {
return err
}
defer f.Close()

_, err = f.Write(data)
if err != nil {
return err
}
return nil
}

First, we allocate jobs. We use 3000 because at the time of writing, xkcd has over 2500 comic issues, and we want to make sure we get all of them.

Exercise: Create a small program that tells you exactly how many issues are on the xkcd website, to remove the need for an estimate.

  • To allocate, we start a goroutine. Note that this goroutine will block once 100 jobs have been added to the channel. It will wait for another goroutine to read the jobs channel.

  • We start a goroutine to collect the results. Why do this now? Well, the results channel is currently empty. Trying to read data from it will block the routine, until data has been written to the channel.

  • That makes it 2 goroutines blocked and waiting for read and write operations.

  • We create the Worker pool. This spawns many workers (100 in our example) and they read from the jobs channel, and write to the results channel.

  • That begins to satisfy the 2 blocked goroutines we had earlier.

  • We get the value of the “done” boolean channel to ensure all results have been collected.

  • Then we convert to JSON and write the data to file.

Complete Code

Here’s a complete code for the project:

package main

import (
"encoding/json"
"fmt"
"log"
"net/http"
"os"
"strings"
"sync"
"time"
)

type Result struct {
Month string `json:"month"`
Num int `json:"num"`
Link string `json:"link"`
Year string `json:"year"`
News string `json:"news"`
SafeTitle string `json:"safe_title"`
Transcript string `json:"transcript"`
Alt string `json:"alt"`
Img string `json:"img"`
Title string `json:"title"`
Day string `json:"day"`
}
const Url = "https://xkcd.com"


func fetch(n int) (*Result, error) {

client := &http.Client{
Timeout: 5 * time.Minute,
}

// concatenate strings to get url; ex: https://xkcd.com/571/info.0.json
url := strings.Join([]string{Url, fmt.Sprintf("%d", n), "info.0.json"}, "/")

req, err := http.NewRequest("GET", url, nil)

if err != nil {
return nil, fmt.Errorf("http request: %v", err)
}

resp, err := client.Do(req)
if err != nil {
return nil, fmt.Errorf("http err: %v", err)
}

var data Result

// error from web service, empty struct to avoid disruption of process
if resp.StatusCode != http.StatusOK {
data = Result{
}
} else {
if err := json.NewDecoder(resp.Body).Decode(&data); err != nil {
return nil, fmt.Errorf("json err: %v", err)
}
}

resp.Body.Close()

return &data, nil
}

type Job struct {
number int
}

var jobs = make(chan Job, 100)
var results = make(chan Result, 100)
var resultCollection []Result

func allocateJobs(noOfJobs int) {
for i := 0; i <= noOfJobs; i++ {
jobs <- Job{i+1}
}
close(jobs)
}

func worker(wg *sync.WaitGroup) {
for job := range jobs {
result, err := fetch(job.number)
if err != nil {
log.Printf("error in fetching: %v\n", err)
continue // skip sending a nil result when fetch fails
}
results <- *result
}
wg.Done()
}

func createWorkerPool(noOfWorkers int) {
var wg sync.WaitGroup
for i := 0; i <= noOfWorkers; i++ {
wg.Add(1)
go worker(&wg)
}
wg.Wait()
close(results)
}

func getResults(done chan bool) {
for result := range results {
if result.Num != 0 {
fmt.Printf("Retrieving issue #%d\n", result.Num)
resultCollection = append(resultCollection, result)
}
}
done <- true
}

func main() {
// allocate jobs
noOfJobs := 3000
go allocateJobs(noOfJobs)

// get results
done := make(chan bool)
go getResults(done)

// create worker pool
noOfWorkers := 100
createWorkerPool(noOfWorkers)

// wait for all results to be collected
<-done

// convert result collection to JSON
data, err := json.MarshalIndent(resultCollection, "", " ")
if err != nil {
log.Fatal("json err: ", err)
}

// write json data to file
err = writeToFile(data)
if err != nil {
log.Fatal(err)
}
}

func writeToFile(data []byte) error {
f, err := os.Create("xkcd.json")
if err != nil {
return err
}
defer f.Close()

_, err = f.Write(data)
if err != nil {
return err
}
return nil
}

https://blog.devgenius.io/concurrency-with-sample-project-in-golang-297400beb0a4

Time always takes away, right on schedule, the part it means to take.

When a p tag's content is empty

p:empty:before {
color: #CCC;
content: "Don't make me empty!";
}

Adding content inside a p tag, at the start

p:before { 
display: block;
content: 'Some';
}

Adding content inside a p tag, at the end

p:after { 
display: block;
content: 'Some';
}

Adding an a tag's title attribute to its text

a:before {
content: attr(title) ": ";
}

<a title="A web design community." href="https://css-tricks.com">CSS-Tricks</a>

Adding a Unicode character

https://unicode-table.com/cn/2665/

p::after {
content: "\2665";
color: red;
font-size: 23px;
}

Implementing tooltips

a {
color: #900;
text-decoration: none;
}

a:hover {
color: red;
position: relative;
}

a[title]:hover::after {
content: attr(title);
padding: 4px 8px;
color: #333;
position: absolute;
left: 0;
top: 100%;
white-space: nowrap;
z-index: 20;
border-radius: 5px;
box-shadow: 0px 0px 4px #222;
background-image: linear-gradient(#eeeeee, #cccccc);
}

Adding symmetrical icons before and after a heading

h2 {
text-align: center;
}
h2:before, h2:after {
font-family: "Some cool font with glyphs", serif;
content: "\00d7"; /* Some fancy character */
color: #c83f3f;
}
h2:before {
margin-right: 10px;
}
h2:after {
margin-left: 10px;
}

Implementing a breadcrumb effect

li {
float: left;
margin-left: 12px;
list-style-type: none;
}

li:after {
content: "// ";
position: relative;
left: 3px;
}

https://css-tricks.com/pseudo-element-roundup/#top-of-site