When it comes to building robust and efficient web applications, a well-structured database is crucial. Laravel simplifies database management with its elegant Object-Relational Mapping (ORM) tool called Eloquent. In this blog post, we’ll delve into an advanced database optimization technique: using indexes on virtual columns in MySQL, combined with Laravel Eloquent.

Understanding virtual columns

Virtual columns, also known as generated columns, are columns in a database table that don’t physically store data but instead derive their values from other columns or expressions. These columns are computed on-the-fly when queried, providing a convenient way to manipulate and transform data without altering the actual table structure.

In MySQL, you can create virtual columns using expressions, such as mathematical calculations or string manipulations, and then index these columns for improved query performance.
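The same idea can be prototyped outside MySQL: SQLite (3.31+) supports generated columns with nearly identical `GENERATED ALWAYS AS` syntax. Here is a minimal sketch using Python's built-in sqlite3 module; the table and column names are just for illustration and mirror the products example used throughout this post.

```python
import sqlite3

# In-memory database; requires SQLite 3.31+ for generated columns.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE products (
        id INTEGER PRIMARY KEY,
        unit_price REAL,
        quantity INTEGER,
        -- STORED: computed on write and physically saved, so it can be indexed
        total_price REAL GENERATED ALWAYS AS (unit_price * quantity) STORED
    )
""")
conn.execute("CREATE INDEX idx_total_price ON products (total_price)")

conn.execute("INSERT INTO products (unit_price, quantity) VALUES (2.5, 4)")
row = conn.execute("SELECT total_price FROM products").fetchone()
print(row[0])  # 10.0 -- computed by the database, never supplied by the app
```

Filtering on `total_price` now uses `idx_total_price`, exactly the effect the MySQL index gives you.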

Benefits of virtual column indexing

Indexing virtual columns can offer several advantages:

  • Faster Query Performance: Indexes on virtual columns allow MySQL to quickly locate the relevant rows, reducing the time needed to retrieve data.

  • Simplified Data Transformation: Virtual columns enable you to perform complex data transformations within the database, reducing the need for application-level data manipulation.

Using virtual columns in Laravel Eloquent

Let’s walk through the steps to use virtual columns and indexes in Laravel Eloquent:

Define a virtual column in a migration

To create a virtual column, you need to define it in a Laravel migration using the storedAs method. For example, let’s create a virtual column that calculates the total price based on the quantity and unit price:

public function up()
{
    Schema::create('products', function (Blueprint $table) {
        $table->id();
        $table->string('name');
        $table->decimal('unit_price', 8, 2);
        $table->integer('quantity');
        $table->decimal('total_price', 8, 2)
            ->storedAs('unit_price * quantity') // Define the generated column
            ->index(); // Index the generated column
        $table->timestamps();
    });
}

Use the virtual column in Eloquent models

Once you’ve defined the virtual column, you can use it in your Eloquent models like any other column. Laravel Eloquent will handle the virtual column seamlessly:

class Product extends Model
{
    protected $fillable = ['name', 'unit_price', 'quantity'];

    // total_price is a real column in the table (computed by MySQL),
    // so Eloquent hydrates it automatically; no accessor or $appends
    // entry is needed to read it.
}

Now, you can access the total_price virtual column on your Product model instances as if it’s a standard attribute:

$product = Product::find(1);
echo $product->total_price; // Access the virtual column

Benefit from indexing

By adding the ->index() method when defining the virtual column in your migration, you’ve instructed MySQL to create an index on it. This index will significantly improve query performance when filtering, sorting, or searching for records based on the virtual column.

virtualAs vs storedAs

In Laravel Eloquent, both storedAs and virtualAs are methods used to work with virtual columns, but they serve slightly different purposes. Let’s explore the differences between these two methods.

The storedAs method is used to define a virtual column in a database table, and it specifies how the virtual column’s values are calculated and stored within the table. Here are the key characteristics of storedAs:

  • Stored Value: When you use storedAs, the calculated value for the virtual column is physically stored in the database table. This means that the result of the expression or formula you provide in storedAs is computed when a record is inserted or updated, and the result is saved in the table.

  • Indexing: You can index virtual columns created with storedAs. Indexing can significantly improve query performance when filtering or searching based on the virtual column.

  • Data Integrity: Since the value is stored in the table, it’s subject to data integrity constraints. If the formula involves other columns, changes to those columns will trigger updates to the virtual column.

Example:

$table->decimal('total_price', 8, 2)
    ->storedAs('unit_price * quantity')
    ->index();

In this example, the total_price column is a virtual column, and its value is calculated as the product of unit_price and quantity. The result is stored in the total_price column in the table, and an index is created on it.

On the other hand, the virtualAs method is used to define a virtual column without physically storing its values in the table. Here are the key characteristics of virtualAs:

  • Computed On-the-Fly: When you use virtualAs, the virtual column’s value is computed on-the-fly whenever you query the database. It’s not physically stored in the table.

  • Indexing Caveats: Because nothing is physically stored, filtering or sorting by a column defined with virtualAs recomputes the expression for every row unless an index exists. MySQL 5.7+ can build secondary indexes on virtual generated columns; on older versions no index is possible, which can mean slower queries.

  • Data Integrity: There are no data integrity constraints associated with virtualAs because no data is stored. Changes to other columns won’t affect the virtual column since it’s always calculated dynamically.

Example:

$table->decimal('total_price', 8, 2)
    ->virtualAs('unit_price * quantity');

In this example, the total_price column is also a virtual column, but its value is calculated on-the-fly using the provided expression whenever you access it in a query. No physical storage or indexing is involved.

In summary, the choice between storedAs and virtualAs depends on your specific use case. If you need to frequently query and filter by the virtual column while maintaining data integrity, storedAs with indexing is a good option. If you only need the calculated value occasionally and don’t require data storage or indexing, virtualAs is more appropriate.
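To make the stored-versus-virtual distinction concrete, here is a small hedged sketch, again using SQLite (3.31+) through Python's sqlite3 as a stand-in for MySQL. Both flavors return the same value; they differ only in when the expression is evaluated and whether the result occupies row storage.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE products (
        unit_price REAL,
        quantity INTEGER,
        -- computed on write and physically saved (like Laravel's storedAs)
        total_stored REAL GENERATED ALWAYS AS (unit_price * quantity) STORED,
        -- computed on read, never saved (like Laravel's virtualAs)
        total_virtual REAL GENERATED ALWAYS AS (unit_price * quantity) VIRTUAL
    )
""")
conn.execute("INSERT INTO products (unit_price, quantity) VALUES (2.5, 4)")
stored, virtual = conn.execute(
    "SELECT total_stored, total_virtual FROM products").fetchone()
print(stored, virtual)  # 10.0 10.0 -- same value, different storage cost
```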

Conclusion

Leveraging virtual columns with indexing in MySQL, coupled with Laravel Eloquent, can greatly enhance the efficiency and maintainability of your database-driven web applications. By offloading complex calculations to the database engine and optimizing queries with indexes, you’ll be well on your way to building high-performance applications that scale with ease.

So, next time you’re working on a Laravel project and need to perform calculations on your data, consider using virtual columns and indexing to boost your application’s database performance. Your users will thank you for the snappy response times, and your database will appreciate the reduced workload.

https://www.yellowduck.be/posts/combining-virtual-columns-with-indexes-in-laravel-eloquent

Greetings humans (I am not a bot 😶) hope you are interfacing properly?

Follow me on a journey as I show you a simple way of continuous deployment on a Laravel project using git and some other things (just read on).

For our tutorial, we’re going to need a few things before we proceed:

  • A Laravel project
  • Some version of git (Github would be used for this tutorial, but the process is just about the same)
  • Access to a server (You can test locally tho but it won’t be the same)
  • Eyes
  • Fingers
  • A brain
  • ok I’ll stop now…

Let us start from the beginning: a Laravel project.

Very beautiful framework, (no regrets about cheating on asp.net [this is not a confession 😶]).

The first thing we want to do in our project is use Composer to install symfony/process; we are going to need this package later.

You can do that by running this simple command at the root of your project

$ composer require symfony/process

This adds the required entry to your composer.json file and, on recent Composer versions, installs the package right away. If it didn’t, or if you want to refresh your other dependencies, simply run this command:

$ composer update

That should add the package to your vendor folder and generate the required classes and so on.

Here is a link to the Symfony docs where you can read up more about that.

After this is done, we are going to need a shell script that is going to hold the sauce to our magic.

This is what our shell script looks like:

#!/bin/sh
# activate maintenance mode
php artisan down
# update source code
git pull
# update PHP dependencies
composer install --no-interaction --no-dev --prefer-dist
# --no-interaction Do not ask any interactive question
# --no-dev Disables installation of require-dev packages.
# --prefer-dist Forces installation from package dist even for dev versions.
# update database
php artisan migrate --force
# --force Required to run when in production.
# stop maintenance mode
php artisan up

We can call this file deploy.sh

The truth is that this is just a template, you can modify this script to suit whatever needs you might have.

Now you have to make this script executable

$ sudo chmod +x deploy.sh

Depending on your production environment, this method is very risky so if you’re one of those “safety” freaks, just clap for me and move on…

But if you are one with the force (Linux) please proceed, it only gets interesting from here.

Now that we have our script ready, we need to prepare our git webhook.

On GitHub, on your repository page, select the Settings tab, then Webhooks in the left navigation. Or go directly to the URL:

https://github.com/<your account>/<your repository>/settings/hooks

Click Add Webhook:

Now we would need to add this webhook to our project (this is where it gets fun)

First we need to add our secret to the project or, in layman’s terms, we need to make our project understand that there is a secret that a URL needs before we proceed.

In config/app.php, add this line:

'deploy_secret' => env('APP_DEPLOY_SECRET'),

In your .env file add your webhook secret:

APP_DEPLOY_SECRET=changemenoworfacetheconsequences
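The placeholder above should of course be replaced with a random value. Any CSPRNG will do; as one option (not something the original post prescribes), Python's secrets module can generate a suitable token:

```python
import secrets

# 32 hex characters of cryptographically secure randomness
secret = secrets.token_hex(16)
print(f"APP_DEPLOY_SECRET={secret}")
```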

Now we’re done with the manual part; let’s write some code.

We need a controller to house the logic that makes our deploy process run. Now let’s make our controller…

$ php artisan make:controller DeployController

I’m just going to call this controller DeployController for simplicity’s sake.

Then we would add all our code, don’t worry I’ll explain most of it. At the end our controller should look something like this:

<?php

namespace App\Http\Controllers;

use Illuminate\Http\Request;
use Symfony\Component\Process\Process;

class DeployController extends Controller
{
    public function deploy(Request $request)
    {
        $githubPayload = $request->getContent();
        $githubHash = $request->header('X-Hub-Signature', '');
        $localToken = config('app.deploy_secret');
        $localHash = 'sha1=' . hash_hmac('sha1', $githubPayload, $localToken, false);

        // hash_equals() expects the known-good string first
        if (hash_equals($localHash, $githubHash)) {
            $root_path = base_path();
            // Symfony Process < 4 accepts a command string like this;
            // on 4.2+ use Process::fromShellCommandline() instead.
            $process = new Process('cd ' . $root_path . '; ./deploy.sh');
            $process->run(function ($type, $buffer) {
                echo $buffer;
            });
        }
    }
}

Before I proceed, clap for me, it’s not easy to indent your code here on medium.com.
The code above does the following:

  • Makes sure the post request is coming from GitHub using the X-Hub-Signature header unique to GitHub. You can remove this particular verification if you’re feeling adventurous, but I recommend you keep it.
    You can always refer to the documentation of the git hosting service you are using for its own signature header

  • Makes sure the post request is coming from your GitHub repo by verifying your deploy secret (in a production environment there are other checks before and after this, so don’t bother much about how flimsy the security might look)

  • Uses the Symfony Process component to run the deploy script at the root of the project path in a shell environment
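The signature check described in the first bullet is easy to reproduce outside PHP. Here is a small Python sketch of the same SHA-1 HMAC computation GitHub performs; the secret and payload values are made up for illustration:

```python
import hashlib
import hmac

secret = b"changemenoworfacetheconsequences"  # APP_DEPLOY_SECRET
payload = b'{"ref": "refs/heads/main"}'       # raw request body from GitHub

# GitHub sends "sha1=<hex hmac of the body>" in the X-Hub-Signature header
expected = "sha1=" + hmac.new(secret, payload, hashlib.sha1).hexdigest()

def verify(header_value: str) -> bool:
    # constant-time comparison, same role as PHP's hash_equals()
    return hmac.compare_digest(expected, header_value)

print(verify(expected))         # True
print(verify("sha1=deadbeef"))  # False
```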

That’s the basic gist of the code above; let’s proceed to adding a route for the webhook we added to GitHub (or whatever proper-sounding English fits, English is hard)

Navigate to routes/web.php in your project and add this line

Route::post('deploy', 'DeployController@deploy');

This route always has to be a POST route because GitHub sends only POST requests to webhooks, so you can call this another check if you want.

Secondly, to prevent CSRF token validation errors, we add the route above to the exception list in app/Http/Middleware/VerifyCsrfToken.php

Which when done should look like this:

<?php

namespace App\Http\Middleware;

use Illuminate\Foundation\Http\Middleware\VerifyCsrfToken as Middleware;

class VerifyCsrfToken extends Middleware
{
    /**
     * The URIs that should be excluded from CSRF verification.
     *
     * @var array
     */
    protected $except = [
        '/deploy',
    ];
}

After this, on your server, change the unix group of your project folder to www-data. This is necessary to allow the shell script to run in peace (it lets the www-data user update the project folder). This can simply be done by:

$ sudo chgrp -R www-data .

Then after all this, you are done.

With this now you have successfully set up a simple autodeployment (coughs Continuous Deployment) process on your project using git (while Jenkins and Travis are having some alone time).

If you’ve made it this far, Congratulations!!! You made it through a series of bad jokes and hopefully learnt something, so please a round of applause for yourself (I mean that clap button 😐)

https://medium.com/@gmaumoh/laravel-how-to-automate-deployment-using-git-and-webhooks-9ae6cd8dffae



OctoberCMS - Relation lists with filters [HOWTO]
OctoberCMS | Date: Sep 23, 2018

In more advanced projects you will soon realize that relation lists/forms, and in general the whole RelationController, are lacking functionality. One of the things that are missing is filters in the relation list. But fear not, you can render lists and forms manually and then you can add filters to them. The best place to start with manual lists are these two tutorials: https://octobercms.com/support/article/ob-21
https://octobercms.com/support/article/ob-20

But they only cover how to make a simple list rendered by hand in a partial and without filters. What I will cover in this article is how to do the same but using ListController (to render the list with filters for us automatically).

Once you know the formula this is a pretty easy process. But I reckon that getting there by yourself can be a painful process (it was for me). After seeing the tutorial videos you would probably dive into the Behaviours and ListController to see how October does it, because the documentation is still lacking. But this also has some good sides: you need to consciously code your plugins, you can’t just paste random code from the internet and make a plugin out of it. In other words, your code quality will be by default higher than code for other leading CMS platforms :)


But let’s get to the point. Let’s say we have an Order controller and model, and a Product controller and model, both glued together by a many-to-many relationship with some pivot data. You will soon realize that when adding products to an order manually (after reaching about 100 products) it gets really annoying to scroll through the list of products to find the ones you want to add to the order. Yeah, you have search, but sometimes you don’t remember the name, or you just want to browse a given product category or color or anything like that. A list filter would come in handy here. Below are the steps needed to achieve that:

  • Add custom button to relation toolbar to have Ajax handler that will render the custom list. We will remove the default Add Product button(rendered by RelationController) and put a custom Add Product button.
  • We need custom Products list widget to display list of products
  • We need to attach filter to Products list widget
  • As an option, we can use a query scope to show, let’s say, only active products.

STEP 1. Edit Controller/Orders/config_relation.yaml. Your toolbarButtons declaration for the products relation probably looks like this:

toolbarButtons: add | remove

Like I said before, we want to use a custom add button. Let’s swap the default add button for a custom one. I will call it “productsadd”. The line will look like this:

toolbarButtons: productsadd | remove

Now we need to put the code for the custom button somewhere; October makes this really easy for us. The only thing we need to do is create a file called _relation_button_productsadd.htm in the Controllers/orders directory.
This is what my file looks like:

<button
    class="btn btn-secondary oc-icon-plus"
    data-control="popup"
    data-handler="onAddProduct"
    data-size="large">
    Add Product
</button>

The two most important lines here are:

data-control="popup"

This will open the relation list in the modal window.

data-handler="onAddProduct"

This is our Ajax handler to display the custom list. We need to add a function in our Orders controller to handle it. Let’s go to Controllers/Orders.php, but before we add this action we should do some other things too. I will put it all in one file with comments explaining the lines of code we will add. Bear in mind that this is not the complete Orders.php controller file; these are mostly only the lines of code you need to add.

[...]
# List and Filter widget variables, name them as you want :)
protected $productsListWidget;
protected $productsFilterWidget;

[...]
public function __construct()
{
    parent::__construct();
    BackendMenu::setContext('Redmarlin.ShopClerk', 'Shop', 'Orders');
    # We need to create the Products list widget
    $this->productsListWidget = $this->createProductListWidget();
}

[...]
# This is the Ajax handler invoked when clicking the "Add Product" button. All it does is assign
# the previously created widgets to variables that are accessible from partials.
public function onAddProduct()
{
    $this->vars['productListWidget'] = $this->productsListWidget;

    # Variable necessary for the filter functionality
    $this->vars['productFilterWidget'] = $this->productsFilterWidget;

    # Render the custom list partial; the name you choose here will be the partial's file name
    return $this->makePartial('product_custom_list');
}
# Ahhh, finally there, the most important part: here we declare all the necessary
# things to make a List widget with filters happen.
protected function createProductListWidget()
{
    # First we need config for the list, as described in the video tutorials mentioned at the beginning.
    # Specify which list configuration file to use for this list
    $config = $this->makeConfig('$/redmarlin/shopclerk/models/product/columns_relation.yaml');

    # Specify the list model
    $config->model = new \Redmarlin\ShopClerk\Models\Product;

    # Let's configure some more things, like records per page, and let's show checkboxes on the list.
    # Most of the options mentioned in https://octobercms.com/docs/backend/lists#configuring-list will work
    $config->recordsPerPage = '30';
    $config->showCheckboxes = 'true';

    # Here we actually make the list using the Lists widget
    $widget_product = $this->makeWidget('Backend\Widgets\Lists', $config);

    # For the optional Step 4: alter the product list query before displaying it.
    # We bind to the list.extendQuery event and assign a function that should be executed to extend
    # the query (the function is defined in this very same controller file)
    $widget_product->bindEvent('list.extendQuery', function ($query) {
        $this->productExtendQuery($query);
    });

    # Step 3: the filter part. We must define the config, really similar to the Product list widget config
    # Filter configuration file
    $filterConfig = $this->makeConfig('$/redmarlin/shopclerk/models/product/filter_relation.yaml');

    # Use the Filter widget class to make the widget and bind it to the controller
    $filterWidget = $this->makeWidget('Backend\Widgets\Filter', $filterConfig);
    $filterWidget->bindToController();

    # We need to bind to the filter.update event in order to refresh the list after selecting
    # the desired filters.
    $filterWidget->bindEvent('filter.update', function () use ($widget_product, $filterWidget) {
        return $widget_product->onRefresh();
    });

    # Finally we attach the Filter widget to the Product list widget.
    $widget_product->addFilter([$filterWidget, 'applyAllScopesToQuery']);

    $this->productsFilterWidget = $filterWidget;

    # Don't forget to bind the whole thing to the controller
    $widget_product->bindToController();

    # Return the prepared widget object
    return $widget_product;
}

# Function that extends the default Product query to only show active products
public function productExtendQuery($query)
{
    $query->where('status', 'active');
}

That is basically all that is needed in the Orders controller. But we are still a few things short. We need the partial that we declared in our Ajax handler (onAddProduct): “product_custom_list”.
Create a file _product_custom_list.htm in the Controllers/orders/ directory. The code in this file is basically copied from the RelationController partial for managing a pivot relation (modules/backend/behaviors/relationcontroller/partials/_manage_pivot.htm). If you need code for another relation type, just copy the appropriate file from the RelationController dir and then modify it to suit your needs. In the first line, by using data-request-data we are telling the relation controller which relation we are displaying here. Apart from that we are rendering the Filter and List widgets.

I have also customized a few other things here, like removing the search widget and removing parts I won’t use (i.e. the list will always be rendered with checkboxes).

<div id="relationManagePopup" data-request-data="_relation_field: 'product'">
    <?= Form::open() ?>
    <div class="modal-header">
        <button type="button" class="close" data-dismiss="popup">×</button>
        <h4 class="modal-title">Product Selection List</h4>
    </div>
    <div class="list-flush">
        <?php if ($productFilterWidget): ?>
            <?= $productFilterWidget->render() ?>
        <?php endif ?>
    </div>

    <?= $productListWidget->render() ?>

    <div class="modal-footer">
        <button
            type="button"
            class="btn btn-primary"
            data-control="popup"
            data-handler="onRelationManageAddPivot"
            data-size="huge"
            data-dismiss="popup"
            data-stripe-load-indicator>
            <?= e(trans('backend::lang.relation.add_selected')) ?>
        </button>
        <button
            type="button"
            class="btn btn-default"
            data-dismiss="popup">
            <?= e(trans('backend::lang.relation.cancel')) ?>
        </button>
    </div>
    <?= Form::close() ?>
</div>
<script>
    setTimeout(
        function(){ $('#relationManagePopup input.form-control:first').focus() },
        310
    )
</script>

If you need the search widget, add it the same way we added the Filter widget.

With this we can render the Products list with working filters in the Orders update/create screen as a relation. After choosing a Product from the list, a pivot create form will be shown.

But there is still a tiny detail we should take care of. When using a group-type filter, the dropdown list will be shown below our modal window. In other words, it will be invisible! You can fix it with just one line of CSS: change the z-index of the “control-popover” class to show it above the modal window. Something like:

div.control-popover {
    z-index: 9999;
}

will do. Then I simply injected the CSS file from plugin/assets/backend_mods.css into the Orders controller. But you can inject it globally in Plugin.php; this way you don’t need to add it in every controller.
That’s it, I hope you’ll find this tutorial helpful. Let me know if I got something wrong or something is not clear enough.

https://redmarlin.net/blog/category/octobercms


Install Node.js

Install Nativefier

npm install nativefier -g
# on macOS you may hit insufficient permissions on some directory,
# in which case run it as administrator:
sudo npm install nativefier -g

Option overview

Version

-v, --version

[icon]

-i, --icon <path>

[strict-internal-urls]

--strict-internal-urls

Disables base domain matching when determining if a link is internal. Only the --internal-urls regex and login pages will be matched against, so app.foo.com will be external to www.foo.com unless it matches the --internal-urls regex.

Generate the desktop app

# nativefier --help shows the full documentation
nativefier --name "Teambition" "https://www.teambition.com"
# for Apple Silicon (M1) Macs you can optionally target arm64
nativefier -n "Teambition" -a "arm64" "https://www.teambition.com"

https://github.com/nativefier/nativefier/


Step 1: install Homebrew

/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"

Step 2: update zsh and git

brew install zsh

==> Downloading https://homebrew.bintray.com/bottles/zsh-5.7.1.high_sierra.bottle.tar.gz
######################################################################## 100.0%
==> Pouring zsh-5.7.1.high_sierra.bottle.tar.gz
/usr/local/Cellar/zsh/5.7.1: 1,515 files, 13.3MB

Step 3: switch to zsh and install oh-my-zsh

Check the shell currently in use:

echo $SHELL

/bin/bash

List the installed shells:

cat /etc/shells

/bin/bash
/bin/csh
/bin/ksh
/bin/sh
/bin/tcsh
/bin/zsh

Switch to zsh:

chsh -s /bin/zsh

Next, install oh-my-zsh:

sh -c "$(curl -fsSL https://raw.github.com/robbyrussell/oh-my-zsh/master/tools/install.sh)"

After the installation completes, the terminal displays the following:

         __                                     __
  ____  / /_     ____ ___  __  __   ____  _____/ /_
 / __ \/ __ \   / __ `__ \/ / / /  /_  / / ___/ __ \
/ /_/ / / / /  / / / / / / /_/ /    / /_(__  ) / / /
\____/_/ /_/  /_/ /_/ /_/\__, /    /___/____/_/ /_/
                        /____/        ....is now installed!


Please look over the ~/.zshrc file to select plugins, themes, and options.

p.s. Follow us at https://twitter.com/ohmyzsh.

p.p.s. Get stickers and t-shirts at http://shop.planetargon.com.

Step 4: configure oh-my-zsh

Open the oh-my-zsh configuration file:

# open the zshrc file for editing; you can also use vim
open ~/.zshrc
# I use VS Code
open ~/.zshrc -a Visual\ Studio\ Code

Themes
The ZSH_THEME option is oh-my-zsh’s theme setting; the oh-my-zsh GitHub wiki provides a list of themes.
When set to ZSH_THEME=random, a random theme is picked every time you open a terminal.

Plugins

plugins=(git osx autojump zsh-autosuggestions zsh-syntax-highlighting)

Note: zsh-autosuggestions and zsh-syntax-highlighting are custom plugins that need to be installed separately; use git to clone each plugin into the custom plugins directory:

# autosuggestions plugin
git clone git://github.com/zsh-users/zsh-autosuggestions $ZSH_CUSTOM/plugins/zsh-autosuggestions
# syntax highlighting plugin
git clone git://github.com/zsh-users/zsh-syntax-highlighting $ZSH_CUSTOM/plugins/zsh-syntax-highlighting

Install any other plugins you need the same way. If a listed plugin is not installed, the terminal will print an error on startup; follow the error message and install the missing plugin.

Reload the configuration:

source ~/.zshrc

Troubleshooting

If zsh complains about directory permissions after the update, fix it with:

chmod 755 /Users/yangzie/.oh-my-zsh/plugins/zsh-syntax-highlighting
chmod 755 /Users/yangzie/.oh-my-zsh/plugins/zsh-autosuggestions

https://a1049145827.github.io/2019/05/15/Mac-%E7%8E%AF%E5%A2%83%E5%AE%89%E8%A3%85%E5%B9%B6%E9%85%8D%E7%BD%AE%E7%BB%88%E7%AB%AF%E7%A5%9E%E5%99%A8-oh-my-zsh/


Our Example XML File

So to begin with, we’ll need an XML file that we can traverse.

<?xml version="1.0" encoding="UTF-8"?>
<users>
    <user type="admin">
        <name>Elliot</name>
        <social>
            <facebook>https://facebook.com</facebook>
            <twitter>https://twitter.com</twitter>
            <youtube>https://youtube.com</youtube>
        </social>
    </user>
    <user type="reader">
        <name>Fraser</name>
        <social>
            <facebook>https://facebook.com</facebook>
            <twitter>https://twitter.com</twitter>
            <youtube>https://youtube.com</youtube>
        </social>
    </user>
</users>

You’ll see the above XML has attributes set on the user tags and nested elements; if you are able to parse this, then you should, by extension, be able to parse any XML file regardless of size.
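Attributes plus nesting is indeed most of what real-world XML amounts to. As a quick cross-language sanity check (this sketch uses Python's standard-library ElementTree rather than Go, with the document trimmed to the relevant parts), the same structure can be traversed like so:

```python
import xml.etree.ElementTree as ET

# The same users document inlined, so the sketch is self-contained
doc = """<?xml version="1.0" encoding="UTF-8"?>
<users>
  <user type="admin">
    <name>Elliot</name>
    <social><facebook>https://facebook.com</facebook></social>
  </user>
  <user type="reader">
    <name>Fraser</name>
    <social><facebook>https://facebook.com</facebook></social>
  </user>
</users>"""

root = ET.fromstring(doc)
# pull the type attribute and the nested <name> element from each <user>
names = [(u.get("type"), u.findtext("name")) for u in root.findall("user")]
print(names)  # [('admin', 'Elliot'), ('reader', 'Fraser')]
```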

Reading in our File

The first obstacle we’ll have to overcome is reading this file into memory. We can do this by using a combination of the “os” package and the “io/ioutil” package. (On Go 1.16+, note that ioutil is deprecated and os.ReadFile / io.ReadAll do the same job.)

package main

import (
    "fmt"
    "io/ioutil"
    "os"
)

func main() {

    // Open our xmlFile
    xmlFile, err := os.Open("users.xml")
    // if os.Open returns an error then handle it
    if err != nil {
        fmt.Println(err)
    }

    fmt.Println("Successfully Opened users.xml")
    // defer the closing of our xmlFile so that we can parse it later on
    defer xmlFile.Close()

}

Defining our Structs

Before we can parse our xml file, we need to define some structs. We’ll have one to represent the complete list of users, one to represent our user and then one to represent our users social links.

import (
    ...
    // remember to add encoding/xml to your list of imports
    "encoding/xml"
    ...
)

// our struct which contains the complete
// array of all Users in the file
type Users struct {
    XMLName xml.Name `xml:"users"`
    Users   []User   `xml:"user"`
}

// the user struct, this contains our
// Type attribute, our user's name and
// a social struct which will contain all
// our social links
type User struct {
    XMLName xml.Name `xml:"user"`
    Type    string   `xml:"type,attr"`
    Name    string   `xml:"name"`
    Social  Social   `xml:"social"`
}

// a simple struct which contains all our
// social links
type Social struct {
    XMLName  xml.Name `xml:"social"`
    Facebook string   `xml:"facebook"`
    Twitter  string   `xml:"twitter"`
    Youtube  string   `xml:"youtube"`
}

Unmarshalling Our XML

So above we’ve seen how to load our file into memory. In order to unmarshal it, we need to convert this file to a byte array and then use the xml.Unmarshal method to populate our Users array.

// read our opened xmlFile as a byte array.
byteValue, _ := ioutil.ReadAll(xmlFile)

// we initialize our Users array
var users Users
// we unmarshal our byteArray which contains our
// xmlFile's content into 'users' which we defined above
xml.Unmarshal(byteValue, &users)

Full Implementation

package main

import (
    "encoding/xml"
    "fmt"
    "io/ioutil"
    "os"
)

// our struct which contains the complete
// array of all Users in the file
type Users struct {
    XMLName xml.Name `xml:"users"`
    Users   []User   `xml:"user"`
}

// the user struct, this contains our
// Type attribute, our user's name and
// a social struct which will contain all
// our social links
type User struct {
    XMLName xml.Name `xml:"user"`
    Type    string   `xml:"type,attr"`
    Name    string   `xml:"name"`
    Social  Social   `xml:"social"`
}

// a simple struct which contains all our
// social links
type Social struct {
    XMLName  xml.Name `xml:"social"`
    Facebook string   `xml:"facebook"`
    Twitter  string   `xml:"twitter"`
    Youtube  string   `xml:"youtube"`
}

func main() {

    // Open our xmlFile
    xmlFile, err := os.Open("users.xml")
    // if os.Open returns an error then handle it
    if err != nil {
        fmt.Println(err)
    }

    fmt.Println("Successfully Opened users.xml")
    // defer the closing of our xmlFile so that we can parse it later on
    defer xmlFile.Close()

    // read our opened xmlFile as a byte array.
    byteValue, _ := ioutil.ReadAll(xmlFile)

    // we initialize our Users array
    var users Users
    // we unmarshal our byteArray which contains our
    // xmlFile's content into 'users' which we defined above
    xml.Unmarshal(byteValue, &users)

    // we iterate through every user within our users array and
    // print out the user Type, their name, and their facebook url
    // as just an example
    for i := 0; i < len(users.Users); i++ {
        fmt.Println("User Type: " + users.Users[i].Type)
        fmt.Println("User Name: " + users.Users[i].Name)
        fmt.Println("Facebook Url: " + users.Users[i].Social.Facebook)
    }

}

https://tutorialedge.net/golang/parsing-xml-with-golang/

To live is the rarest thing in the world.

Most people exist, that is all.

In this article, we’ll go through different ways to append to a slice concurrently, using tools like wait groups and mutexes, and the challenges posed by data races.

Aim: a program that separates the odd and even numbers from 0 to 9 and appends them to their corresponding slices, so we should end up with odd = [1 3 5 7 9] (in any order) and even = [0 2 4 6 8] (in any order).

Attempt-0 Only with goroutines

Background: the only goroutine a program has at startup is the one that calls the main function, so we refer to it as the main goroutine. New goroutines are created by the go statement: a regular function or method call prefixed with the keyword go. A go statement causes the function to run in a newly created goroutine, while the go statement itself returns immediately.

In this attempt we append to the slices from multiple goroutines: the main goroutine plus 10 new goroutines created by anonymous go functions.

package main

import (
	"fmt"
)

func done() {
	var odd = make([]int, 0)
	var even = make([]int, 0)
	for i := 0; i <= 9; i++ {
		if i%2 == 0 {
			go func(i int) {
				even = append(even, i)
			}(i)
		} else {
			go func(i int) {
				odd = append(odd, i)
			}(i)
		}
	}
	fmt.Println(odd)
	fmt.Println(even)
}

func main() {
	for i := 1; i <= 10; i++ {
		fmt.Println("========================")
		done()
	}
}

Output:

========================
[1 3 5 7]
[0 2 8]
========================
[7 5 3]
[4 6 0 2 8]
========================
[1 3 5]
[0 2 4 6 8]
========================
[1 3 5 7]
[0 2 4 6 8]
========================
[1 3 5]
[0 2 4 6 8]
========================
[1 3 5 7]
[0 2 4 6 8]
========================
[1 3 5 7]
[0 2 4 6 8]
========================
[1 3 5]
[0 2 4 6 8]
========================
[1 3 5 7]
[0 2 4 6 8]
========================
[1 3 5 7]
[0 2 4 6 8]

Major issues:

  • The main goroutine can complete its execution without waiting for the other goroutines (which append the data to the slices) to finish. As a result, the print statements in the main goroutine may print the slices while data is still being appended by the other goroutines.
  • Data race (we’ll cover this in the next part)

Attempt-1 With sync.WaitGroup

We’ll use sync.WaitGroup. With the help of the WaitGroup type in the sync package, a program can wait for particular goroutines. This sync primitive halts program execution until the goroutines in the WaitGroup have finished running.

In a nutshell, the main goroutine will wait until all the other goroutines have finished.

wg.Add(1) tells the WaitGroup that it needs to wait for one more goroutine each time the loop iterates. defer wg.Done() then notifies the WaitGroup when a goroutine finishes, and wg.Wait() blocks execution until all the goroutines have finished running. The whole procedure resembles maintaining a counter: wg.Add(1) increments it, wg.Done() decrements it, and wg.Wait() waits for the counter to reach 0. This is the crux of how wait groups work.

package main

import (
	"fmt"
	"sync"
)

func done() {
	var wg sync.WaitGroup
	var odd = make([]int, 0)
	var even = make([]int, 0)
	for i := 0; i <= 9; i++ {
		wg.Add(1)
		if i%2 == 0 {
			go func(i int) {
				defer wg.Done()
				even = append(even, i)
			}(i)
		} else {
			go func(i int) {
				defer wg.Done()
				odd = append(odd, i)
			}(i)
		}
	}
	wg.Wait()
	fmt.Println(odd)
	fmt.Println(even)
}

func main() {
	for i := 1; i <= 10; i++ {
		fmt.Println("========================")
		done()
	}
}

Output

========================
[1 3 5 7 9]
[0 2 4 6 8]
========================
[1 3 9]
[0 2 4 6 8]
========================
[9 5 7 1 3]
[6 8 0 2 4]
========================
[1 3 5 7 9]
[0 2 4 6 8]
========================
[1 3 5 7 9]
[0 2 4 8]
========================
[1 3 5 9 7]
[0 2 4 6 8]
========================
[1 3 5 7 9]
[0 2 4 6 8]
========================
[9 1 3 5 7]
[0 2 4 6 8]
========================
[3 9 1 5 7]
[0 4 2 6 8]
========================
[9 1 7 3 5]
[0 6 8 2 4]

What’s happening now?
To make sure the function behaved consistently, we ran it ten times. It undoubtedly outperforms the previous version, but occasionally the results are still not what we anticipated. So what’s the reason? The reason is a data race.

Data race: when two or more goroutines access the same memory location, at least one of the accesses is a write, and there is no ordering between them, this is referred to as a data race.

In simpler terms, we have a race condition: multiple goroutines writing to a slice concurrently. The behavior is unpredictable. We need a mutex, protecting the appends with a lock.

We can confirm this with the command go run -race pgm3b.go.

Attempt-2 With sync.WaitGroup and mutex

We’ll use a mutex together with sync.WaitGroup. This not only ensures that the main goroutine waits until all 10 other goroutines complete, but also that at any given point in time only one goroutine can write to a slice. This is the principle of mutual exclusion, or mutex. The appends are protected by the mutex, avoiding dirty writes.

package main

import (
	"fmt"
	"sync"
)

func done() {
	type answer struct {
		MU   sync.Mutex
		data []int
	}
	var odd answer
	var even answer
	wg := &sync.WaitGroup{}
	for i := 0; i <= 9; i++ {
		if i%2 == 0 {
			wg.Add(1)
			go func(i int) {
				defer wg.Done()
				even.MU.Lock()
				even.data = append(even.data, i)
				even.MU.Unlock()
			}(i)
		} else {
			wg.Add(1)
			go func(i int) {
				defer wg.Done()
				odd.MU.Lock()
				odd.data = append(odd.data, i)
				odd.MU.Unlock()
			}(i)
		}
	}
	wg.Wait()
	fmt.Println(odd.data)
	fmt.Println(even.data)
}

func main() {
	for i := 1; i <= 10; i++ {
		fmt.Println("========================")
		done()
	}
}

Output

========================
[1 9 5 3 7]
[0 4 2 6 8]
========================
[5 9 3 7 1]
[2 0 8 6 4]
========================
[1 3 5 7 9]
[2 0 4 6 8]
========================
[1 3 5 7 9]
[0 2 4 6 8]
========================
[1 3 5 7 9]
[0 2 4 6 8]
========================
[1 3 5 7 9]
[2 0 4 6 8]
========================
[1 3 5 7 9]
[0 2 4 6 8]
========================
[3 1 9 7 5]
[0 2 4 6 8]
========================
[9 5 7 1 3]
[6 8 4 2 0]
========================
[1 3 5 7 9]
[0 2 4 6 8]

https://blog.devgenius.io/how-to-safely-append-data-to-the-same-slice-concurrently-in-golang-df467e1ebc9c

No regrets being born into China in this life; in the next life, I would still be Chinese!

Problem background

There is a data table that records, for each QQ uin, information such as the number of active days for adding friends, the number of friend-add operations, and the number of distinct toUin targets added. The table is created as follows:

echo "drop table if exists uinPortrait"|mysql -proot@mysql 
echo "CREATE TABLE IF NOT EXISTS uinPortrait(
uin int(10) unsigned NOT NULL DEFAULT 0,
active_days int(10) unsigned NOT NULL DEFAULT 0,
add_friend_count int(10) unsigned NOT NULL DEFAULT 0,
add_friend_uin_count int(10) unsigned NOT NULL DEFAULT 0,
black_count int(10) unsigned NOT NULL DEFAULT 0,
black_uin_count int(10) unsigned NOT NULL DEFAULT 0
)ENGINE=MyISAM DEFAULT CHARSET=utf8" |mysql -proot@mysql

The data in the table is stored in the following form:

+-------+-------------+------------------+----------------------+-------------+-----------------+
| uin   | active_days | add_friend_count | add_friend_uin_count | black_count | black_uin_count |
+-------+-------------+------------------+----------------------+-------------+-----------------+
| 10000 | 1           | 2                | 2                    | 0           | 0               |
| 10000 | 0           | 0                | 0                    | 4           | 3               |
| 10001 | 1           | 3                | 2                    | 0           | 0               |
| 10001 | 0           | 0                | 0                    | 5           | 5               |
....
+-------+-------------+------------------+----------------------+-------------+-----------------+

Now the rows with the same uin need to be merged into a single row, so the following SQL was used:

# first create an empty table with the same schema
mysql>create table if not exists blankUinPortrait like uinPortrait;

mysql>insert into blankUinPortrait select uin,sum(active_days),sum(add_friend_count),sum(add_friend_uin_count),sum(black_count),sum(black_uin_count) from uinPortrait group by uin;

Executing the insert into failed with: ERROR 1062 (23000) at line 1: Duplicate entry '1332883220' for key 'group_key'. Not every uin triggered the error on insert; only a few sporadic ones did.

Solution

The MySQL version is 5.1.61. This was puzzling: blankUinPortrait has no primary key or unique index, so it was unclear why a value conflict could occur, and searching Google and Baidu turned up nothing. I tried restarting mysql, writing the intermediate data to disk and then loading it back into the table, and changing insert into to replace into, but none of it worked. Persistence paid off, though: the answer finally turned up on Stack Overflow; see Duplicate entry for key ‘group_key’.

The fix is to modify the MySQL configuration file, usually /etc/my.cnf, adding max_heap_table_size=536870912 and tmp_table_size=536870912 to it.
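Concretely, the additions to /etc/my.cnf look like this (512 MB, i.e. 536870912 bytes; the [mysqld] section header is an assumption about a typical config layout):

```ini
[mysqld]
# both must be raised: the effective in-memory temporary table
# limit is min(tmp_table_size, max_heap_table_size)
tmp_table_size      = 536870912
max_heap_table_size = 536870912
```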

First, tmp_table_size:
GROUP BY operations generate internal temporary tables, and this variable caps the size of such a table (the effective limit is the smaller of tmp_table_size and max_heap_table_size). If an in-memory temporary table exceeds the limit, MySQL automatically converts it to an on-disk MyISAM table stored under the tmpdir directory. The default:

mysql> show variables like "tmpdir";
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| tmpdir | /tmp/ |
+---------------+-------+

Raising this value also increases the allowed size of heap tables, which can speed up join queries. Even so, it is better to optimize the queries themselves and make sure the temporary tables they generate stay in memory, so that oversized temporary tables are not converted to on-disk MyISAM tables.

mysql> show global status like 'created_tmp%';
+-------------------------+---------+
| Variable_name           | Value   |
+-------------------------+---------+
| Created_tmp_disk_tables | 21197   |
| Created_tmp_files       | 58      |
| Created_tmp_tables      | 1771587 |
+-------------------------+---------+

Every time a temporary table is created, Created_tmp_tables increases. If the temporary table exceeds tmp_table_size, it is created on disk instead, and Created_tmp_disk_tables increases as well. Created_tmp_files is the number of temporary files created by the MySQL server. A reasonably healthy configuration satisfies:
Created_tmp_disk_tables / Created_tmp_tables * 100% <= 25%. For the server above, Created_tmp_disk_tables / Created_tmp_tables * 100% = 1.20%, which looks fine.

show variables like 'tmp_table_size' shows the current value. The default is 16MB, and 64-256MB is usually a good range; the buffer is per-thread, so setting it too high can exhaust memory and cause I/O stalls.

About max_heap_table_size:
This variable defines the maximum size of the memory tables a user can create, and is used to compute the maximum row count of a memory table. It can be changed dynamically, i.e. SET max_heap_table_size = N, but the new value has no effect on existing memory tables unless the table is recreated (CREATE TABLE), altered (ALTER TABLE), or truncated (TRUNCATE TABLE). A server restart also resizes existing memory tables to the global max_heap_table_size value.

Together with tmp_table_size, this variable limits the size of internal in-memory temporary tables. For details see Section 8.4.4, “Internal Temporary Table Use in MySQL”.

show variables like 'max_heap_table_size' shows the current value; the default is 16MB.

https://cloud.tencent.com/developer/article/1176358

I wish I could be more like you.

Sequential Async Call

Let’s start with a basic console program that connects to a few website URLs and tests whether each connection succeeds. There are no goroutines at first, and all calls are made in order, which is not very efficient.

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	start := time.Now()
	links := []string{
		"https://go.dev/learn/",
		"https://www.netflix.com/",
		"https://go.dev/learn/",
		"https://www.netflix.com/",
		"https://go.dev/learn/",
		"https://www.netflix.com/",
		"https://github.com/",
	}

	checkUrls(links)
	// note: Printf, not Println, so the %f verb is actually formatted
	fmt.Printf("Completed the code process, took: %f seconds\n", time.Since(start).Seconds())
}

func checkUrls(urls []string) {
	for _, link := range urls {
		checkUrl(link)
	}
}

func checkUrl(url string) {
	_, err := http.Get(url)

	if err != nil {
		fmt.Println("We could not reach: ", url)
	} else {
		fmt.Println("Success reaching the website: ", url)
	}
}

Adding Concurrency and Optimizing the code


package main

import (
	"fmt"
	"net/http"
	"sync"
	"time"
)

func main() {
	start := time.Now()
	links := []string{
		"https://go.dev/learn/",
		"https://www.netflix.com/",
		"https://go.dev/learn/",
		"https://www.netflix.com/",
		"https://go.dev/learn/",
		"https://www.netflix.com/",
		"https://github.com/",
	}

	checkUrls(links)
	// note: Printf, not Println, so the %f verb is actually formatted
	fmt.Printf("Completed the code process, took: %f seconds\n", time.Since(start).Seconds())
}

func checkUrls(urls []string) {
	c := make(chan string)
	var wg sync.WaitGroup

	for _, link := range urls {
		wg.Add(1)
		go checkUrl(link, c, &wg)
	}

	// close the channel once every worker has finished,
	// so the range loop below can terminate
	go func() {
		wg.Wait()
		close(c)
	}()

	for msg := range c {
		fmt.Println(msg)
	}
}

func checkUrl(url string, c chan string, wg *sync.WaitGroup) {
	defer wg.Done()
	_, err := http.Get(url)

	if err != nil {
		c <- "We could not reach: " + url
	} else {
		c <- "Success reaching the website: " + url
	}
}

https://faun.pub/golang-tutorial-how-to-implement-concurrency-with-goroutines-and-channels-67d0f30d9e35

When you become a parent, one thing becomes really clear.
And that's that you want to make sure your children feel safe.

There are several ways to deploy your Golang code, especially when you are using Docker to run the executable file of your Go project. We can build an image from the project and simply run it on a local computer, or in a deployment, by pulling the image from a registry.

Requirement

Getting Started

First, you need to start your Docker daemon with systemctl start docker or service docker start, using sudo if needed.

Then we will create our simple Go HTTP code.

$ mkdir go-dockerfile && cd go-dockerfile
$ go mod init myapp
$ touch server.go

server.go:

package main

import (
	"os"

	"github.com/gin-gonic/gin"
	"github.com/joho/godotenv"
)

func init() {
	godotenv.Load()
}

func main() {
	port := os.Getenv("PORT")
	if port == "" {
		port = "8080"
	}

	router := gin.Default()

	router.GET("/", func(c *gin.Context) {
		c.String(200, "Hello World")
	})

	router.GET("/env", func(c *gin.Context) {
		c.String(200, "Hello %s", os.Getenv("NAME"))
	})

	router.Run(":" + port)
}

Our server.go contains a simple gin router and an optional godotenv loader.

The / path will return “Hello World” and the /env path will return “Hello ${NAME}”.
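Since godotenv.Load() reads a .env file from the working directory, a matching file would look like the following (the filename follows godotenv's default; the values are assumptions, not from the article):

```ini
# .env
PORT=8080
NAME=Gopher
```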

Dockerfile

There are several ways to write a Dockerfile, but I will show 3 examples with different base images: the official golang image, alpine, and scratch.

FROM Official Image

FROM golang:1.16-alpine

WORKDIR /project/go-docker/

# COPY go.mod, go.sum and download the dependencies
COPY go.* ./
RUN go mod download

# COPY All things inside the project and build
COPY . .
RUN go build -o /project/go-docker/build/myapp .

EXPOSE 8080
ENTRYPOINT [ "/project/go-docker/build/myapp" ]

Let’s break this Dockerfile down section by section:

  • FROM golang:1.16-alpine , we will use golang:1.16-alpine as the base image of this Docker build.
  • WORKDIR , will be our working directory of our command/path of our next commands.
  • COPY go.* ./ , we will copy go.mod & go.sum file from our project to the working directory.
  • RUN go mod download , download the project dependencies from go modules.
  • COPY . . , copy all things from our project into the working directory.
  • RUN go build -o /project/go-docker/build/myapp . , build our project in the working directory and output it in project/go-docker/build/myapp as a binary file.
  • EXPOSE 8080 , telling docker that our code will expose port 8080 .
  • ENTRYPOINT ["/project/go-docker/build/myapp"] , when we run the container of this image, it will start from our build binary.

Instructions that repeat in the later Dockerfiles won’t be explained twice. After this, we need to run this command:

docker build -f Dockerfile -t test-go-docker:latest .

  • -f flag is the filename of our Dockerfile .

  • -t flag is the name of the image later on.

  • . at the end of the command is the directory of the Dockerfile .

Alpine Base Image

FROM golang:1.16-alpine as builder

WORKDIR /project/go-docker/

# COPY go.mod, go.sum and download the dependencies
COPY go.* ./
RUN go mod download

# COPY All things inside the project and build
COPY . .
RUN go build -o /project/go-docker/build/myapp .

FROM alpine:latest
COPY --from=builder /project/go-docker/build/myapp /project/go-docker/build/myapp

EXPOSE 8080
ENTRYPOINT [ "/project/go-docker/build/myapp" ]

The difference from the first one:

  • FROM golang:1.16-alpine as builder , we will use golang:1.16-alpine and tag it as builder that later on will be used.
  • FROM alpine:latest , we will create a new base image from alpine .
  • COPY --from=builder /project/go-docker/build/myapp /project/go-docker/build/myapp , copy the built binary file into the new alpine image so it can be run later on.

The image produced by this Dockerfile is much smaller than the previous one.

FROM Scratch

FROM golang:1.16-alpine as builder

WORKDIR /project/go-docker/

# COPY go.mod, go.sum and download the dependencies
COPY go.* ./
RUN go mod download

# COPY All things inside the project and build
COPY . .
RUN go build -o /project/go-docker/build/myapp .

FROM scratch
COPY --from=builder /project/go-docker/build/myapp /project/go-docker/build/myapp

EXPOSE 8080
ENTRYPOINT [ "/project/go-docker/build/myapp" ]

And for the last Dockerfile, we only change the alpine base image to scratch . Scratch is an empty image, so once the container is running we can’t exec into it, because it doesn’t even have a shell.

The resulting image is slightly smaller than the alpine-based one.
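One caveat worth noting (an assumption about typical builds, not stated in the article): scratch contains no libc, CA certificates, or timezone data, so the binary must be statically linked, and outbound HTTPS calls need certificates copied into the image. A commonly used tweak to the builder stage looks like this:

```dockerfile
# force a statically linked binary so it runs on scratch
RUN CGO_ENABLED=0 go build -o /project/go-docker/build/myapp .

# later, in the final stage, copy CA certs from the builder if
# the app makes outbound HTTPS calls
# COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
```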

Try running the image with docker run -d -p 8080:8080 test-go-docker:latest ; this forwards port 8080 from the container to port 8080 on the host, after which you can access http://localhost:8080 .

Conclusions

Personally, I would choose the second Dockerfile . Why? Because the image is small but still includes a shell and basic commands, so we can docker exec into the running container and poke around. If we use the scratch base image, debugging a running container is hard because we can’t exec into it.

That’s all for this article about Docker with Go programming. Hope you have a nice day :).