
You and your kind will perish in body and name, while the rivers flow on for ten thousand ages.

Starting and stopping nginx

As usual, let's start with how things are set up on each platform.

On CentOS, installed from source:

/usr/local/nginx/nginx           # start
/usr/local/nginx/nginx -s reload # graceful reload
/usr/local/nginx/nginx.conf      # configuration file

On macOS, I installed it with brew:

/usr/local/bin/nginx            # start
/usr/local/bin/nginx -s reload  # graceful reload
/usr/local/etc/nginx/nginx.conf # configuration file

The nginx.conf configuration file in detail

Compared with Apache's configuration file, nginx's is actually fairly clear and simple. I used to think it was hard, but once I sat down and thought it through, it is really quite simple. Roughly speaking, it breaks down into the following blocks:

main
events {
    ....
}
http {
    ....
    upstream myproject {
        .....
    }
    server {
        ....
        location {
            ....
        }
    }
    server {
        ....
        location {
            ....
        }
    }
    ....
}

The nginx configuration file is divided into six main areas: main (global settings), events (nginx's event/working model), http (HTTP settings), server (virtual host settings), location (URL matching), and upstream (load-balancing backend settings).

The main block

Below is a main area; these are the global settings:

user nobody nobody;
worker_processes 2;
error_log /usr/local/var/log/nginx/error.log notice;
pid /usr/local/var/run/nginx/nginx.pid;
worker_rlimit_nofile 1024;

user specifies the user and group that the nginx worker processes run as; by default they run as the nobody account.

worker_processes specifies how many worker processes nginx should start. Each nginx process uses roughly 10-12 MB of memory on average. In practice one worker is usually enough; on a multi-core CPU it is recommended to use as many workers as there are CPU cores. I set it to 2 here, so two worker processes are started, giving three processes in total including the master.

error_log defines the global error log file. The available log levels are debug, info, notice, warn, error, and crit; debug produces the most detailed output and crit the least.

pid specifies where the master process ID file is stored.

worker_rlimit_nofile specifies the maximum number of file descriptors a single nginx process may open; here it is 1024. If you raise it to something like 65535, you also need to raise the operating-system limit with "ulimit -n 65535".

The events block

The events block specifies nginx's event-processing model and the connection limit; it usually looks like this:

events {
    use kqueue; # on macOS
    worker_connections 1024;
}

use specifies nginx's event-processing method. nginx supports select, poll, kqueue, epoll, rtsig, and /dev/poll. select and poll are the standard methods, while kqueue and epoll are the efficient ones; epoll is used on Linux and kqueue on BSD systems. Since macOS is BSD-based, it uses kqueue as well. On Linux, epoll is the preferred choice.

worker_connections defines the maximum number of connections per worker process, i.e. the maximum number of client requests it will accept; the default is 1024. The maximum number of clients is determined by worker_processes and worker_connections: Max_clients = worker_processes * worker_connections. When nginx is used as a reverse proxy this becomes Max_clients = worker_processes * worker_connections / 4.
The per-process connection limit is also bounded by the operating system's maximum number of open files, so the worker_connections setting only takes full effect after raising that limit, e.g. with "ulimit -n 65536".

The http block

The http block is arguably the core of the configuration. It holds the HTTP server settings, and its server and upstream sub-blocks are crucial; they will be covered in detail when we get to reverse proxying, load balancing, and virtual directories.

http {
    include mime.types;
    default_type application/octet-stream;
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /usr/local/var/log/nginx/access.log main;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 10;
    #gzip on;
    upstream myproject {
        .....
    }
    server {
        ....
    }
}

Let's go through what each option in this snippet means.
include sets up the file MIME types; the mappings are defined in the mime.types file in the configuration directory and tell nginx how to recognise file types.

default_type sets the default type to a binary stream; it is used when a file's type is not otherwise defined. For example, when no location is configured to handle ASP, nginx will not parse it, and requesting an .asp file from the browser will simply download it.

log_format sets the log format and which fields are recorded. Here it is named main, which is exactly the format access_log refers to below.

A log line in the main format looks like the following; you can add or remove fields as needed.

127.0.0.1 - - [21/Apr/2015:18:09:54 +0800] "GET /index.php HTTP/1.1" 200 87151 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2272.76 Safari/537.36"

access_log specifies where the access log for each request is written; the trailing main selects the log format, matching the main defined by log_format.

The sendfile directive enables efficient file transfer mode. Setting the tcp_nopush and tcp_nodelay directives to on helps prevent network congestion.

keepalive_timeout sets how long an idle client connection is kept alive; after this timeout the server closes the connection.

There are many more options; we will cover them as they come up.

The server block

The server block is a sub-block of http and defines a virtual host. For now we will only cover the most basic configuration; the rest comes later.

Let's look at how a simple server block is written:

server {
    listen 8080;
    server_name localhost 192.168.12.10 www.yangyi.com;
    # Global definitions; if everything lives under one directory, this is the simplest setup.
    root /Users/yangyi/www;
    index index.php index.html index.htm;
    charset utf-8;
    access_log /usr/local/var/log/host.access.log main;
    error_log /usr/local/var/log/host.error.log error;
    ....
}

server marks the start of a virtual host definition.
listen specifies the port the virtual host listens on.
server_name specifies the IP address or domain names; separate multiple names with spaces.
root sets the web root directory for the whole server block. Be careful to distinguish it from a root defined inside a location {} block.
index sets the default index pages for the whole server block. Again, distinguish it from an index defined inside a location {} block.
charset sets the default character encoding for pages.
access_log specifies where this virtual host's access log is stored; the trailing main selects the access-log format.

The location block

location is the most heavily used and most important block in nginx; load balancing, reverse proxying, virtual domains and so on all revolve around it. Let's take it step by step.

As the name suggests, location locates things: it matches and parses URLs. It provides powerful regular-expression matching and supports conditional matching, so the location directive lets nginx treat dynamic and static pages differently. Our PHP environment setup relies on it.

Let's start with this one, which sets the default index page and the virtual host directory.

location / {
    root /Users/yangyi/www;
    index index.php index.html index.htm;
}

location / matches requests for the root path.

The root directive specifies the virtual host's web directory when the root path is accessed. It can be a relative path (relative to the nginx installation directory) or an absolute path.

index sets the default pages served when only the domain name is entered, tried in order: index.php, index.html, index.htm. If directory listing is not enabled and none of these default pages is found, a 403 error is returned.

location also supports regular-expression matching, which is enabled by adding a ~ after location.

The example below uses a regex match to hand requests off to PHP, just as we did when setting up our environment:

location ~ \.php$ {
    root /Users/yangyi/www;
    fastcgi_pass 127.0.0.1:9000;
    fastcgi_index index.php;
    include fastcgi.conf;
}

Anyone familiar with regular expressions knows that \.php$ matches URLs ending in .php, which is how PHP files get parsed. The root inside works the same way as before, pointing at the virtual host's root directory.
fastcgi_pass points to the php-fpm address we set up earlier. We will cover the other parameters later.

location has other uses too; we will look at them when we get to concrete examples.

The upstream block

The upstream block is the load-balancing block: it distributes client requests across backend servers using a simple scheduling algorithm. For now let's just learn how to use it; concrete examples will come later.

upstream iyangyi.com {
    ip_hash;
    server 192.168.12.1:80;
    server 192.168.12.2:80 down;
    server 192.168.12.3:8080 max_fails=3 fail_timeout=20s;
    server 192.168.12.4:8080;
}

In the example above, the upstream directive defines a load balancer named iyangyi.com. The name can be anything; you simply refer to it wherever it is needed later.

Inside it, ip_hash is one of the available load-balancing scheduling algorithms, described in more detail below. After that come the backend servers, each introduced with the server keyword followed by its IP address.

nginx's load-balancing module currently supports four scheduling algorithms:

  • weight (round robin, the default). Requests are distributed to the backend servers one by one in order of arrival; if a backend server goes down, the failed machine is removed automatically so users are not affected. weight specifies a polling weight: the higher the weight, the higher the probability of receiving requests. This is mainly useful when backend servers have uneven performance.
  • ip_hash. Requests are distributed according to a hash of the client IP, so visitors from the same IP always reach the same backend server, which neatly solves the session-sharing problem of dynamic pages.
  • fair. A smarter algorithm than the two above: it balances load based on page size and load time, i.e. it assigns requests according to backend response times, giving priority to the fastest. nginx does not support fair out of the box; to use it you must download the upstream_fair module.
  • url_hash. Requests are distributed by a hash of the requested URL, directing each URL to the same backend server, which improves the efficiency of backend caches. nginx does not support url_hash natively either; you need to install the nginx hash package.

In the HTTP upstream block, the server directive specifies each backend server's IP address and port, and can also set the server's state in the load-balancing rotation. Commonly used states are:

  • down: the server temporarily does not take part in load balancing.
  • backup: a reserved backup machine. It only receives requests when all the non-backup machines are down or busy, so it carries the least load.
  • max_fails: the number of allowed failed requests, 1 by default. When the maximum is exceeded, the error defined by the proxy_next_upstream module is returned.
  • fail_timeout: how long the server is suspended after max_fails failures. max_fails and fail_timeout can be used together.

https://www.zybuluo.com/phper/note/89391

The first step is as good as half over.

While recording product sales for a multi-tenant application, we need to store the metrics in a way that guarantees solid separation between tenants. One idea is to use key names like shop:{shopId}:product:{productId}:sales; that way we have a key per product for each shop, since product IDs might co-exist in multiple shops. We can increment the value of each key on every purchase and read it when needed, and if we need the sales for the whole business we can do something like:

Redis::mget("shop:{$shopId}:product:1", "shop:{$shopId}:product:2", ...);

This will bring the sales of every product inside a given business.

That sounds cool, but seems like you’ll introduce a better approach?
I've been reading this post from the Instagram Engineering blog and I was amazed by the performance gain they described from using Redis Hashes over regular strings; let me share some of the numbers:

  • Having 1 Million string keys needed about 70MB of memory
  • Having 1000 Hashes each with 1000 Keys only needed 17MB!

The reason behind that is that hashes can be encoded very efficiently in a small memory space, so the Redis makers recommend that we use hashes whenever possible, since "a few keys use a lot more memory than a single key containing a hash with a few fields". A key represents a Redis Object that holds a lot more information than just its value; a hash field, on the other hand, only holds the value assigned to it, which is why it's much more efficient.

Let’s build our hash

Redis::hmset("shop:{$shopId}:sales", "product:1", 100, "product:2", 400);

This will build a Redis hash with two fields, product:1 and product:2, holding the values 100 and 400.

The hmset command lets us set multiple fields of a hash in one go; there is also an hset command for setting a single field.
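For example, setting a single field with hset could look like this (a quick sketch in the same call style as above; product:3 is just an illustrative field name):

Redis::hset("shop:{$shopId}:sales", "product:3", 250);
// Sets one field of the hash, creating the hash if it does not exist yet.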

We can read the values of hash fields using the following:

Redis::hget("shop:{$shopId}:sales", 'product:1');
// To return a single value

Redis::hmget("shop:{$shopId}:sales", 'product:1', 'product:2');
// To return values from multiple keys

Redis::hvals("shop:{$shopId}:sales");
// To return values of all fields

Redis::hgetall("shop:{$shopId}:sales");
// Also returns values of all fields

In the case of hmget and hvals the return value is an array of values, [100, 400]; in the case of hgetall, however, the return value is an array of keys and values:

["product:1", 100, "product:2", 400]

Much more organized than having multiple keys
Yes, and you also stop polluting the key namespace with lots of complex-named keys.

With all the above mentioned benefits there are also a number of useful operations you can do on a hash key:

Incrementing & Decrementing

Redis::hincrby("shop:{$shopId}:sales", "product:1", 18);
// To increment the sales of product one by 18

Redis::hincrbyfloat("shop:{$shopId}:sales", "product:1", 18.9);
// To increment the sales of product one by 18.9

To decrement you just provide a negative value; there is no decrby command for hash fields.
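So a decrement is simply a negative increment (same commands as above):

Redis::hincrby("shop:{$shopId}:sales", "product:1", -5);
// Decrements the sales of product one by 5.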

Field Existence

Like string fields you can check if a hash key exists:

Redis::hexists("shop:{$shopId}:sales", "product:1");

You can also make sure you don’t override an existing field when that’s not the desired behaviour:

Redis::hsetnx("shop:{$shopId}:sales", "product:1", 100);

This only sets the field if it does not already exist, so an existing value is never overridden.

Other operations

Redis::hdel("shop:{$shopId}:sales", "product:1", "product:2");

This command deletes the given fields from the hash.

Redis::hstrlen("shop:{$shopId}:sales", "product:1");

This command returns the string length of the value stored at the given field.

Performance comes with a cost

As we mentioned before, a hash with a few fields is much more efficient than storing a few keys: each key stores a complete Redis object that contains information about the stored value as well as its expiration time, idle time, reference count, and the encoding used internally.

Technically, if we create one key (one Redis Object) that contains multiple string fields, it requires much less memory, since every field holds nothing but its value; and a hash with a small number of fields is even encoded into a length-prefixed string in a format like:

hashValue = [6]field1[4]val1[6]field2[4]val2

Since a hash field holds only a string value, we cannot associate an expiration time with it. The makers of Redis suggest storing a separate field holding the expiration time for each field if need be, and fetching both fields together to check whether the value is still alive:

Redis::hmset('hashKey', 'field1', 'field1_value', 'field1_expiration', '1495786559');
So whenever we want to use that key we need to bring the expiration value as well and do the extra work ourselves:

Redis::hmget('hashKey', 'field1', 'field1_expiration');
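That extra work could look roughly like this (a minimal sketch, assuming the two fields were stored as above; cleaning up with hdel is just one possible choice):

// Fetch the value together with its stored expiration timestamp.
list($value, $expiresAt) = Redis::hmget('hashKey', 'field1', 'field1_expiration');

if ($expiresAt !== null && time() >= (int) $expiresAt) {
    // The field has "expired": remove both fields and treat the value as missing.
    Redis::hdel('hashKey', 'field1', 'field1_expiration');
    $value = null;
}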

Some information about encoding hashes
From the Redis docs:

Hashes, when smaller than a given number of fields, and up to a maximum field size, are encoded in a very memory efficient way that uses up to 10 times less memory (with 5 time less memory used being the average saving). Since this is a CPU / memory trade off it is possible to tune the maximum number of fields and maximum field size.

By default, hashes are encoded this way when they contain fewer than 512 fields and the largest value stored in a field is less than 64 bytes long, but you can adjust these thresholds using the config command:

Redis::config('set', 'hash-max-zipmap-entries', 1000);
// Sets the maximum number of fields before the hash stops being encoded

Redis::config('set', 'hash-max-zipmap-value', 128);
// Sets the maximum size of a hash field before the hash stops being encoded

https://divinglaravel.com/redis/redis-hashes

The wealth of the mind is the only wealth.

Let’s store our product sales in a key:

Redis::set('product:1:sales', 1000)
Redis::set('product:1:count', 10)

Now to read it we use:

Redis::get('product:1:sales')

Incrementing and Decrementing counters

A new purchase was made, let’s increment the sales:

Redis::incrby('product:1:sales', 100)

Redis::incr('product:1:count')

Here we increment the sales key by 100, and increment the count key by 1.

We can also decrement in the same way:

Redis::decrby('product:1:sales', 100)

Redis::decr('product:1:count')

But when it comes to dealing with floating point numbers we need to use a special command:

Redis::incrbyfloat('product:1:sales', 15.5)

Redis::incrbyfloat('product:1:sales', - 30.2)

There’s no decrbyfloat command, but we can pass a negative value to the incrbyfloat command to have the same effect.

The incrby, incr, decrby, decr, and incrbyfloat commands return the value after the operation as their response.
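So we can capture the new value directly (a small sketch continuing the counters above):

$newTotal = Redis::incrby('product:1:sales', 100);
// $newTotal is 1100 if the previous value was 1000.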

Retrieve and update

Now we want to read the latest sales number and reset the counters to zero; maybe we do that at the end of each day:

$value = Redis::getset('product:1:sales', 0)

Here $value will hold 1000; if we read the key after this operation its value will be 0.

Keys Expiration

Let's say we want to send a notification to the owner when inventory is low, but we only want to send it once every hour instead of on every new purchase. So maybe we set a flag and only send the notification when that flag doesn't exist:

Redis::set('user:1:notified', 1, 'EX', 3600);

Now this key will expire after 3600 seconds (1 hour). We can check whether the key exists before setting it and sending the notification:

if (Redis::get('user:1:notified')) {
    return;
}

Redis::set('user:1:notified', 1, 'EX', 3600);

Notifications::send();

Notice: There’s no guarantee that the value of user:1:notified won’t change between the get and set operations, we’ll discuss atomic command groups later, but this example is enough for you to understand how every individual command works.
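As a side note, one way to close that particular gap (a sketch, using the same option-style arguments as the set calls above) is to combine EX with NX so the check and the set happen in a single atomic command:

// With NX the set only succeeds if the key does not exist yet; EX still applies the 1-hour expiry.
$wasSet = Redis::set('user:1:notified', 1, 'EX', 3600, 'NX');

if ($wasSet) {
    Notifications::send();
}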

We can set the expiration of a key in milliseconds as well using:

Redis::set('user:1:notified', 1, 'PX', 3600);

And you may also use the expire command and provide the timeout in seconds:

Redis::expire('user:1:notified', 3600);

Or in milliseconds:

Redis::pexpire('user:1:notified', 3600);

And if you want the keys to expire at a specific time you can use expireat and provide a Unix timestamp:

Redis::expireat('user:1:notified', '1495469730')

Is there a way I can check when a key should expire?
You can use the ttl command (Time To Live), which will return the number of seconds remaining until the key expires.

Redis::ttl('user:1:notified');

That command may return -2 if the key doesn’t exist, or -1 if the key has no expiration set.

You can also use the pttl command to get the TTL in milliseconds.

What if I want to cancel expiration?

Redis::persist('user:1:notified');

This removes the expiration from your key; it returns 1 on success, or 0 if the key doesn't exist or had no expiration set.

Keys Existence

Let's say there's only 1 laracon ticket available and we need to close purchasing once that ticket is sold; we can do:

Redis::set('ticket:sold', $user->id, 'NX')

This will only set the key if it doesn't exist; the next script that tries to set the key will receive null as a response from Redis, which means the key wasn't set.

You can also instruct Redis to set the key only if it exists:

Redis::set('ticket:sold', $user->id, 'XX')

If you want to simply check if a key exists, you can use the exists command:

Redis::exists('ticket:sold')

Reading multiple keys in one go

Sometimes you might need to read multiple keys in one go, you can do this:

Redis::mget('product:1:sales', 'product:2:sales', 'non_existing_key')

The response of this command is an array with the same size of the given keys, if a key doesn’t exist its value is going to be null.

As we discussed before, Redis executes individual commands atomically, which means nothing can change the value of any of the keys once the operation has started; so it's guaranteed that the returned values were not altered between reading the value of the first key and the last key.

Using mget is better than firing multiple get commands because it reduces the RTT (round trip time), the time each individual command takes to travel from the client to the server and carry the response back to the client. More on that later.

Deleting Keys

You can also delete multiple keys at once using the del command:

Redis::del('previous:sales', 'previous:return');

Renaming Keys

You can rename a key using the rename command:

Redis::rename('current:sales', 'previous:sales');

An error is returned if the original key doesn't exist, and the destination key is overwritten if it already exists.

Renaming keys is usually a fast operation, unless a key with the desired name already exists; in that case Redis will delete that existing key before renaming, and deleting a key that holds a very big value can be a bit slow.

So I have to check first if the second key exists to prevent overriding?
Yeah you can use exists to check if the second key exists… OR:

Redis::renamenx('current:sales', 'previous:sales');

This first checks whether the destination key exists; if it does, it simply returns 0 without doing anything, so the key is only renamed when the destination key does not exist.

https://divinglaravel.com/redis/redis-commands

Honesty is the best policy.

Redis is a storage server that persists your data in memory which makes read & write operations very fast, you can also configure it to store the data on disk occasionally, replicate to secondary nodes, and automatically split the data across multiple nodes.

That said, you might want to use Redis when fast access to your data is necessary (caching, live analytics, queue system, etc…), however you’ll need another data storage for when the data is too large to fit in memory or you don’t really need to have fast access, so a combination of Redis & another relational or non-relational database gives you the power to build large scale applications where you can efficiently store large data but also provide a way to read portions of the data very fast.

Use Case

Think of a cloud-based Point Of Sale application for restaurants, as owner it’s extremely important to be able to monitor different statistics related to sales, inventory, performance of different branches, and many other metrics. Let’s focus on one particular metric which is Product Sales, as a developer you want to build a dashboard where the restaurant owner can see live updates on what products have more sales throughout the day.

select SUM(order_products.price), products.name
from order_products
join products on order_products.product_id = products.id
where DATE(order_products.created_at) = CURDATE()
and order_products.status = "done"
group by products.name

The SQL query above will give you a list of the sales each product made throughout the day, but it's a relatively heavy query to run when the restaurant serves thousands of orders every day; simply put, you can't run it in a live-updates dashboard that polls for updates every 60 seconds or so. It would be much nicer if you could cache the product sales every time an order is served and read from that cache instead, something like:

Event::listen('newOrder', function ($order) {
    $order->products->each(function ($product) {
        SomeStorage::increment("product:{$product->id}:sales:2017-05-22", $product->sales);
    });
});

So now on every new order we’ll increment the sales for each product, and we can simply read these numbers later like:

$sales = Product::all()->map(function ($product) {
    return SomeStorage::get("product:{$product->id}:sales:2017-05-22");
});

Replace SomeStorage with Redis and now you have today's product sales living in your server's memory. Reading from memory is super fast, so you don't have to run that large query every time you need to refresh the numbers in the live dashboard.
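To make that concrete, here is the same listener with Redis swapped in (a sketch; the key names simply mirror the example above):

Event::listen('newOrder', function ($order) {
    $order->products->each(function ($product) {
        // Add this product's sales amount to today's counter for that product.
        Redis::incrby("product:{$product->id}:sales:2017-05-22", $product->sales);
    });
});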

Another Use Case

Now you want to know the number of unique visitors opening your website every day. We might store this in SQL, in a table with user_id and date fields, and later run a query like:

select COUNT(Distinct user_id) as count from unique_visits where date = "2017-05-22"

So we just have to add a record to that table every time a user visits the site. But on a high-traffic website this extra DB interaction might not be the best idea. Wouldn't it be cool if we could just do:

SomeStorage::addUnique('unique_visits:2017-05-22', $user->id);

And later on we can do:

SomeStorage::count('unique_visits:2017-05-22');

Storing this in memory is fast and reading the stored numbers from memory is super fast, which is exactly what we need, so I hope by now you have a feel for when using Redis might be a good idea. By the way, the method names used in the examples above don't exist in Redis; I'm just trying to give you a feel for what you can achieve.
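One real way to get those imaginary addUnique/count methods (a sketch, assuming a Redis set keyed by date) is with sadd and scard:

// Record a visit: a set stores each user id at most once per day.
Redis::sadd('unique_visits:2017-05-22', $user->id);

// Count the unique visitors for that day.
$uniqueVisitors = Redis::scard('unique_visits:2017-05-22');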

The Atomic nature of redis operations

Individual commands in Redis are guaranteed to be atomic, meaning nothing can change while a command is executing; for example:

$monthSales = Redis::getSet('monthSales', 0);

This command gets the value of the monthSales key and then sets it to zero. It's guaranteed that no other client can change the value, or rename the key, between the get and the set. That's due to the single-threaded nature of Redis: a single process serves all clients at the same time but can only perform one operation at a time, much like how you can listen to multiple client change requests on a project at once but can only work on one change at any given moment.

There's also a way to guarantee the atomicity of a group of commands using transactions (more on that later), but briefly, let's say you have two clients:

Client 1 wants to increment the value
Client 1 wants to read that value
Client 2 wants to increment the value

These commands might run in the following order:

Client 1: Increment value
Client 2: Increment value
Client 1: read value

This would cause the read operation from Client 1 to give unexpected results, since the value was altered in the middle; that's when a transaction makes sense.
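As a preview, Client 1 could group its two commands so they run as one unit (a sketch using Laravel's Redis::transaction helper, which wraps the commands in MULTI/EXEC):

$responses = Redis::transaction(function ($tx) {
    // Both commands are queued and executed together; no other client can run in between.
    $tx->incr('monthSales');
    $tx->get('monthSales');
});

// $responses[1] holds the value read right after the increment.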

https://divinglaravel.com/redis/before-the-dive

If you work hard now, who can say what the future holds? But if you don't work hard, you can already pretty much guess how the future will look.

Exceptions are a very important method for controlling the execution flow of an application. When an application request diverges from the happy path, it’s often important that you halt execution immediately and take another course of action.

Dealing with problems in an API is especially important. The response from the API is the user interface and so you need to ensure you give a detailed and descriptive explanation of what went wrong.

This includes the HTTP Status code, and error code that is linked to your documentation, as well as a human readable description of what went wrong.

In today’s tutorial I’m going to show you how I structure my Laravel API applications to use Exceptions. This structure will make it very easy to return detailed and descriptive error responses from your API, as well as make testing your code a lot easier.

After a brief hiatus I’m returning to writing about Laravel. After writing about PHP week after week for a long time I was really burned out and needed to take a break.

However, over the last couple of months I’ve been doing some of my best work focusing on Laravel API applications. This has given me a renewed focus with lots of new ideas and techniques I want to share with you.

Instead of starting a new project I’m just going to reboot the existing Cribbb series. All of the code from the previous tutorials can still be found within the Git history.

Understanding HTTP Status Codes

The first important thing to understand when building an API is that the Internet is built upon standards and protocols. Your API is an “interface” into your application and so it is very important that you adhere to these standards.

When a web service returns a response from a request, it should include a Status Code. The Status Code describes the response, whether the request was successful or if an error has occurred.

For example, when a request is successful, your API should return a Status Code of 200, when the client makes a bad request, you should return a Status Code of 400, and if there is an internal server error, you should return a Status Code of 500.

By sticking to these Status Codes and using them under the correct conditions, we can make our API easier to consume for third-party developers and applications.

If you are unfamiliar with the standard HTTP Status Codes I would recommend bookmarking the Wikipedia page and referring to it often. You will find you only ever really use a small handful of the status codes, but it is a good idea to be familiar with them.

How this Exception foundation will work

Whenever we return a response from the API it must use one of the standard HTTP status codes. We must also use the correct status code to describe what happened.

Using the incorrect status code is a really bad thing to do because you are giving the consumer of the API bad information. For example, if you return a 200 Status Code instead of a 400 Status Code, the consumer won’t know they are making an invalid request.

Therefore, we should be able to categorise anything that could possibly go wrong in the application under one of the standard HTTP Status Codes.

For example, if the client requests a resource that doesn’t exist we should return a 404 Not Found response.

To trigger this response from our code, we can throw a matching NotFoundException Exception.

To do this we can create base Exception classes for each HTTP status code.

Next, in our application code we can create specific Exceptions that extend the base HTTP Exceptions to provide a more granular understanding of what went wrong. For example, we might have a UserNotFound Exception that extends the NotFoundException base Exception class.

This means under an exceptional circumstance we can throw a specific Exception for that problem and let it bubble up to the surface.

The application will automatically return the correct HTTP response from the base Exception class.

Finally we also need a way of providing a descriptive explanation of what went wrong. We can achieve this by defining error messages that will be injected when the exception class is thrown.

Hopefully that makes sense. But even if it doesn’t, continue reading as I think it will all fall into place as we look at some code.

Creating the Errors configuration file

It’s very important that you provide your API responses with a descriptive explanation of what went wrong.

If you don’t provide details of exactly what went wrong the consumers of your API are going to struggle to fix the issue.

To keep all of the possible error responses in one place I’m going to create an errors.php configuration file under the config directory.

This will mean we have all of the possible errors in one place which will make creating documentation a lot easier.

It should also make it easy to provide language translations for the errors too, rather than trying to dig through the code to find every single error!

To begin with I’ve created some errors for a couple of the standard HTTP error responses:

<?php

return [

    /*
    |--------------------------------------------------------------------------
    | Default Errors
    |--------------------------------------------------------------------------
    */

    'bad_request' => [
        'title' => 'The server cannot or will not process the request due to something that is perceived to be a client error.',
        'detail' => 'Your request had an error. Please try again.'
    ],

    'forbidden' => [
        'title' => 'The request was a valid request, but the server is refusing to respond to it.',
        'detail' => 'Your request was valid, but you are not authorised to perform that action.'
    ],

    'not_found' => [
        'title' => 'The requested resource could not be found but may be available again in the future. Subsequent requests by the client are permissible.',
        'detail' => 'The resource you were looking for was not found.'
    ],

    'precondition_failed' => [
        'title' => 'The server does not meet one of the preconditions that the requester put on the request.',
        'detail' => 'Your request did not satisfy the required preconditions.'
    ]

];

As the application is developed I can add to this list. It’s also often a good idea to provide a link to the relevant documentation page. At some point in the future I can simply add this into each error.

Creating the abstract Exception

Next I want to create an abstract Exception class that all of my application specific exceptions will extend from.

<?php namespace Cribbb\Exceptions;

use Exception;

abstract class CribbbException extends Exception
{
    /**
     * @var string
     */
    protected $id;

    /**
     * @var string
     */
    protected $status;

    /**
     * @var string
     */
    protected $title;

    /**
     * @var string
     */
    protected $detail;

    /**
     * @param string $message
     * @return void
     */
    public function __construct($message)
    {
        parent::__construct($message);
    }
}

This will make it easy to catch all of the application specific exceptions and provides a clean separation from the other potential exceptions that may be thrown during the application’s execution.

For each exception I will provide an id, status, title and detail.

This is to stay close to the JSON API specification.

I will also provide a getStatus method:

/**
 * Get the status
 *
 * @return int
 */
public function getStatus()
{
    return (int) $this->status;
}

The JSON API specification states that the status code should be a string. I’m casting it as an int in this method so I can provide the correct response code to Laravel.

I will also provide a toArray() method to return the Exception as an array. This is just for convenience:

/**
 * Return the Exception as an array
 *
 * @return array
 */
public function toArray()
{
    return [
        'id' => $this->id,
        'status' => $this->status,
        'title' => $this->title,
        'detail' => $this->detail
    ];
}

Finally I need to get the title and detail for each specific error from the errors.php file.

To do this I will accept the exception id when a new Exception is instantiated.

I will then use this id to get the title and detail from the errors.php file.

So throwing an Exception will look like this:

throw new UserNotFound('user_not_found');  

However, I will also want to provide specific details of the Exception under certain circumstances.

For example I might want to provide the user’s id in the exception detail.

To do this I will allow the exception to accept an arbitrary number of arguments:

throw new UserNotFound('user_not_found', $id);  

To set up the Exception with the correct message I will use the following method:

/**
 * Build the Exception
 *
 * @param array $args
 * @return string
 */
protected function build(array $args)
{
    $this->id = array_shift($args);

    $error = config(sprintf('errors.%s', $this->id));

    $this->title = $error['title'];
    $this->detail = vsprintf($error['detail'], $args);

    return $this->detail;
}

In this method I first pop off the first argument as this will be the id.

Next I get the title and detail from the errors.php configuration file using the id.

Next I vsprintf the remaining arguments into the detail string if they have been passed into the exception.

Finally I can return the detail to be used as the default Exception message.

Creating the base Exceptions

With the abstract Exception in place I can now create the base Exceptions.

For example, here is the NotFoundException

<?php namespace Cribbb\Exceptions;

class NotFoundException extends CribbbException
{
    /**
     * @var string
     */
    protected $status = '404';

    /**
     * @return void
     */
    public function __construct()
    {
        $message = $this->build(func_get_args());

        parent::__construct($message);
    }
}

Each base foundation Exception simply needs to provide the status code and a call to the __construct() method that will call the build() method and pass the message to the parent.

You can now create these simple Exception classes to represent each HTTP status code your application will be using.

If you need to add a new HTTP status code response, it’s very easy to just create a new child class.
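For instance, a 403 response could be covered by a ForbiddenException that follows exactly the same pattern (a sketch; it reuses the 'forbidden' entry already defined in errors.php):

<?php namespace Cribbb\Exceptions;

class ForbiddenException extends CribbbException
{
    /**
     * @var string
     */
    protected $status = '403';

    /**
     * @return void
     */
    public function __construct()
    {
        $message = $this->build(func_get_args());

        parent::__construct($message);
    }
}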

Creating the application Exceptions

Finally we can use these base HTTP Exceptions within our application code to provide more specific Exceptions.

For example you might have a UserNotFound Exception:

<?php namespace Cribbb\Users\Exceptions;

use Cribbb\Exceptions\NotFoundException;

class UserNotFound extends NotFoundException
{

}

Now, whenever you attempt to find a user and the user is not found, you can throw this Exception.
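For example, a hypothetical repository method could throw it like this (a sketch; 'user_not_found' would be an additional entry in the errors.php configuration file):

public function findById($id)
{
    $user = User::find($id);

    if (! $user) {
        // The id is passed along so it can be interpolated into the error detail.
        throw new UserNotFound('user_not_found', $id);
    }

    return $user;
}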

The Exception will bubble up to the surface and the correct HTTP Response will be automatically returned with an appropriate error message.

This means that if an Exception is thrown, you can just let it go; you don't have to catch it, because the consumer needs to be informed that the user was not found.

And in your tests you can assert that a UserNotFound exception was thrown, rather than just a generic NotFound exception. This means you can write tests where you are confident the test is failing for the correct reason, and it makes your tests very easy to read and understand.

Dealing with Exceptions and returning the correct response

Laravel allows you to handle Exceptions and return a response in the Handler.php file under the Exceptions namespace.

The first thing I’m going to do is to add the base CribbbException class to the $dontReport array.

/**
 * A list of the exception types that should not be reported
 *
 * @var array
 */
protected $dontReport = [
    HttpException::class,
    CribbbException::class
];

I don’t need to be told that an application specific Exception has been thrown because this is to be expected. By extending from the base CribbbException class we’ve made it very easy to capture all of the application specific exceptions.

Next I’m going to update the render() method to only render the Exception if we’ve got the app.debug config setting to true, otherwise we can deal with the Exception in the handle() method:

/**
 * Render an exception into an HTTP response
 *
 * @param Request $request
 * @param Exception $e
 * @return Response
 */
public function render($request, Exception $e)
{
    if (config('app.debug')) {
        return parent::render($request, $e);
    }

    return $this->handle($request, $e);
}

And finally we can convert the Exception into a JsonResponse in the handle() method:

/**
 * Convert the Exception into a JSON HTTP Response
 *
 * @param Request $request
 * @param Exception $e
 * @return JsonResponse
 */
private function handle($request, Exception $e)
{
    if ($e instanceof CribbbException) {
        $data = $e->toArray();
        $status = $e->getStatus();
    }

    if ($e instanceof NotFoundHttpException) {
        $data = array_merge([
            'id' => 'not_found',
            'status' => '404'
        ], config('errors.not_found'));

        $status = 404;
    }

    if ($e instanceof MethodNotAllowedHttpException) {
        $data = array_merge([
            'id' => 'method_not_allowed',
            'status' => '405'
        ], config('errors.method_not_allowed'));

        $status = 405;
    }

    return response()->json($data, $status);
}

For the CribbbException classes we can simply call the toArray() method to return the Exception into an array as well as the getStatus() method to return the HTTP Status Code.

We can also deal with any other Exception classes in this method. As you can see I’m catching the NotFoundHttpException and MethodNotAllowedHttpException Exceptions in this example so I can return the correct response.

Finally we can return a JsonResponse by calling the json() method on the response() helper, passing in the $data and $status.

Conclusion

Exceptions are a very important aspect of application development and they are an excellent tool in controlling the execution flow of the application.

Under exceptional circumstances you need to halt the application and return an error rather than continuing on with execution. Exceptions make this very easy to achieve.

It’s important that an API always returns the correct HTTP status code. The API is the interface to your application and so it is very important that you follow the recognised standards and protocols.

You also need to return a human readable error message as well as provide up-to-date documentation of the problem and how it can be resolved.

In today’s tutorial we’ve created a foundation for using Exceptions in the application by creating base classes for each HTTP status code.

Whenever a problem arises in the application we have no reason not to return the specific HTTP status code for that problem.

We’ve also put in place an easy way to list detailed error messages for every possible thing that could go wrong.

This will be easy to keep up-to-date because it’s all in one place.

And finally we’ve created an easy way to use Exceptions in the application. By extending these base Exceptions with application specific exceptions we can create a very granular layer of Exceptions within our application code.

This makes it very easy to write tests where you can assert that the correct Exception is being thrown under specific circumstances.

And it also makes it really easy to deal with exceptions, because 9 times out of 10 you can just let the exception bubble up to the surface.

When the exception reaches the surface, the correct HTTP status code and error message will automatically be returned to the client.

If you do not climb the high mountain, you cannot know the height of the sky; if you do not look down into the deep gorge, you cannot know the thickness of the earth.

A Lua script that adds access control

[root@hbase31 ~]# vim /usr/local/openresty/nginx/conf/lua/access.lua
local ip_block_time = 300 -- how long to block an IP (seconds)
local ip_time_out = 30    -- length of the rate-limit window for an IP (seconds)
local ip_max_count = 20   -- maximum number of requests allowed per IP per window
local BUSINESS = ngx.var.business -- business identifier defined in the nginx location

-- connect to redis
local redis = require "resty.redis"
local conn = redis:new()
conn:set_timeout(2000) -- 2-second timeout (set before connecting so it also applies to connect)
ok, err = conn:connect("192.168.1.30", 6379)

-- if the connection failed, jump to the end of the script
if not ok then
    goto FLAG
end

-- check whether this IP is blocked; if it is, return a 403
is_block, err = conn:get(BUSINESS.."-BLOCK-"..ngx.var.remote_addr)
if is_block == '1' then
    ngx.exit(403)
    goto FLAG
end

-- read this IP's request counter from redis
ip_count, err = conn:get(BUSINESS.."-COUNT-"..ngx.var.remote_addr)

if ip_count == ngx.null then -- not present yet: store the IP with a counter of 1 and expire the key after ip_time_out
    res, err = conn:set(BUSINESS.."-COUNT-"..ngx.var.remote_addr, 1)
    res, err = conn:expire(BUSINESS.."-COUNT-"..ngx.var.remote_addr, ip_time_out)
else
    ip_count = ip_count + 1 -- present: add 1 to the count for this window

    if ip_count >= ip_max_count then -- over the limit for this window: set the block flag for ip_block_time
        res, err = conn:set(BUSINESS.."-BLOCK-"..ngx.var.remote_addr, 1)
        res, err = conn:expire(BUSINESS.."-BLOCK-"..ngx.var.remote_addr, ip_block_time)
    else
        res, err = conn:set(BUSINESS.."-COUNT-"..ngx.var.remote_addr, ip_count)
        res, err = conn:expire(BUSINESS.."-COUNT-"..ngx.var.remote_addr, ip_time_out)
    end
end

-- end marker
::FLAG::
local ok, err = conn:close()

The intent of this script is simple: if an IP makes 20 requests within 30 seconds, it is considered to be requesting too fast and is blocked for 5 minutes. Also, because the counter key's TTL in Redis is set to 30 seconds, the count starts over whenever two requests are more than 30 seconds apart.

Reference the script in the nginx location that needs rate limiting

location /user/ {
    set $business "USER";
    access_by_lua_file /usr/local/openresty/nginx/conf/lua/access.lua;
    proxy_redirect off;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass http://user_224/user/;
}

Note: for front-end pages with lots of static assets (js, css, images, etc.) you can rate-limit only requests of specific formats, for example:

location /h5 {
    if ($request_uri ~ .*\.(html|htm|jsp|json)) {
        set $business "H5";
        access_by_lua_file /usr/local/openresty/nginx/conf/lua/access.lua;
    }
    proxy_redirect off;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass http://h5_224/h5;
}

https://juejin.im/entry/5a45a9b8f265da431f4b62d5

Seek the fragrance on purpose and it will not come; the fragrance lies where the mind does not strive.

Single responsibility principle

A class and a method should have only one responsibility.

Bad:

public function getFullNameAttribute()
{
    if (auth()->user() && auth()->user()->hasRole('client') && auth()->user()->isVerified()) {
        return 'Mr. ' . $this->first_name . ' ' . $this->middle_name . ' ' . $this->last_name;
    } else {
        return $this->first_name[0] . '. ' . $this->last_name;
    }
}

Good:

public function getFullNameAttribute()
{
    return $this->isVerifiedClient() ? $this->getFullNameLong() : $this->getFullNameShort();
}

public function isVerifiedClient()
{
    return auth()->user() && auth()->user()->hasRole('client') && auth()->user()->isVerified();
}

public function getFullNameLong()
{
    return 'Mr. ' . $this->first_name . ' ' . $this->middle_name . ' ' . $this->last_name;
}

public function getFullNameShort()
{
    return $this->first_name[0] . '. ' . $this->last_name;
}

Fat models, skinny controllers

Put all DB related logic into Eloquent models or into Repository classes if you’re using Query Builder or raw SQL queries.

Bad:

public function index()
{
    $clients = Client::verified()
        ->with(['orders' => function ($q) {
            $q->where('created_at', '>', Carbon::today()->subWeek());
        }])
        ->get();

    return view('index', ['clients' => $clients]);
}

Good:

public function index()
{
    return view('index', ['clients' => $this->client->getWithNewOrders()]);
}

class Client extends Model
{
    public function getWithNewOrders()
    {
        return $this->verified()
            ->with(['orders' => function ($q) {
                $q->where('created_at', '>', Carbon::today()->subWeek());
            }])
            ->get();
    }
}

Validation

Move validation from controllers to Request classes.

Bad:

public function store(Request $request)
{
    $request->validate([
        'title' => 'required|unique:posts|max:255',
        'body' => 'required',
        'publish_at' => 'nullable|date',
    ]);

    ....
}

Good:

public function store(PostRequest $request)
{
    ....
}

class PostRequest extends Request
{
    public function rules()
    {
        return [
            'title' => 'required|unique:posts|max:255',
            'body' => 'required',
            'publish_at' => 'nullable|date',
        ];
    }
}

Business logic should be in service class

A controller must have only one responsibility, so move business logic from controllers to service classes.

Bad:

public function store(Request $request)
{
    if ($request->hasFile('image')) {
        $request->file('image')->move(public_path('images') . 'temp');
    }

    ....
}

Good:

public function store(Request $request)
{
    $this->articleService->handleUploadedImage($request->file('image'));

    ....
}

class ArticleService
{
    public function handleUploadedImage($image)
    {
        if (!is_null($image)) {
            $image->move(public_path('images') . 'temp');
        }
    }
}

Don’t repeat yourself (DRY)

Reuse code when you can. SRP is helping you to avoid duplication. Also, reuse Blade templates, use Eloquent scopes etc.

Bad:

public function getActive()
{
    return $this->where('verified', 1)->whereNotNull('deleted_at')->get();
}

public function getArticles()
{
    return $this->whereHas('user', function ($q) {
        $q->where('verified', 1)->whereNotNull('deleted_at');
    })->get();
}

Good:

public function scopeActive($q)
{
    return $q->where('verified', 1)->whereNotNull('deleted_at');
}

public function getActive()
{
    return $this->active()->get();
}

public function getArticles()
{
    return $this->whereHas('user', function ($q) {
        $q->active();
    })->get();
}

Prefer to use Eloquent over using Query Builder and raw SQL queries. Prefer collections over arrays

Eloquent allows you to write readable and maintainable code. Also, Eloquent has great built-in tools like soft deletes, events, scopes etc.

Bad:

SELECT *
FROM `articles`
WHERE EXISTS (SELECT *
              FROM `users`
              WHERE `articles`.`user_id` = `users`.`id`
              AND EXISTS (SELECT *
                          FROM `profiles`
                          WHERE `profiles`.`user_id` = `users`.`id`)
              AND `users`.`deleted_at` IS NULL)
AND `verified` = '1'
AND `active` = '1'
ORDER BY `created_at` DESC

Good:

Article::has('user.profile')->verified()->latest()->get();

Mass assignment

Bad:

$article = new Article;
$article->title = $request->title;
$article->content = $request->content;
$article->verified = $request->verified;
// Add category to article
$article->category_id = $category->id;
$article->save();

Good:

$category->article()->create($request->all());

Do not execute queries in Blade templates and use eager loading (N + 1 problem)

Bad (for 100 users, 101 DB queries will be executed):

@foreach (User::all() as $user)
    {{ $user->profile->name }}
@endforeach

Good (for 100 users, 2 DB queries will be executed):

$users = User::with('profile')->get();

...

@foreach ($users as $user)
    {{ $user->profile->name }}
@endforeach

Comment your code, but prefer descriptive method and variable names over comments

Bad:

if (count((array) $builder->getQuery()->joins) > 0)

Better:

// Determine if there are any joins.
if (count((array) $builder->getQuery()->joins) > 0)

Good:

if ($this->hasJoins())

Do not put JS and CSS in Blade templates and do not put any HTML in PHP classes

Bad:

let article = `{{ json_encode($article) }}`;

Better:

<input id="article" type="hidden" value="{{ json_encode($article) }}">

Or

<button class="js-fav-article" data-article="{{ json_encode($article) }}">{{ $article->name }}</button>

In a Javascript file:

let article = $('#article').val();

The best way is to use a specialized PHP to JS package to transfer the data.

Use config and language files, constants instead of text in the code

Bad:

public function isNormal()
{
    return $article->type === 'normal';
}

return back()->with('message', 'Your article has been added!');

Good:

public function isNormal()
{
    return $article->type === Article::TYPE_NORMAL;
}

return back()->with('message', __('app.article_added'));

Use shorter and more readable syntax where possible

Bad:

$request->session()->get('cart');
$request->input('name');

Good:

session('cart');
$request->name;

Use IoC container or facades instead of new Class

new Class syntax creates tight coupling between classes and complicates testing. Use IoC container or facades instead.

Bad:

$user = new User;
$user->create($request->all());

Good:

public function __construct(User $user)
{
    $this->user = $user;
}

$this->user->create($request->all());

Do not get data from the .env file directly

Pass the data to config files instead and then use the config() helper function to use the data in an application.

Bad:

$apiKey = env('API_KEY');

Good:

// config/api.php
'key' => env('API_KEY'),

// Use the data
$apiKey = config('api.key');

Store dates in the standard format. Use accessors and mutators to modify date format

Bad:

{{ Carbon::createFromFormat('Y-d-m H-i', $object->ordered_at)->toDateString() }}
{{ Carbon::createFromFormat('Y-d-m H-i', $object->ordered_at)->format('m-d') }}

Good:

// Model
protected $dates = ['ordered_at', 'created_at', 'updated_at'];

public function getMonthDayAttribute($date)
{
    return $date->format('m-d');
}

// View
{{ $object->ordered_at->toDateString() }}
{{ $object->ordered_at->monthDay }}

Other good practices

Never put any logic in routes files.

Minimize usage of vanilla PHP in Blade templates.

https://github.com/alexeymezenin/laravel-best-practices
https://laravel-china.org/articles/12762/eighteen-best-practices-of-laravel

Everyone says the land south of the river is lovely; a wanderer should stay there until old age. Spring waters bluer than the sky, asleep in a painted boat listening to the rain. The girl by the wine stove is like the moon, her white wrists like frost and snow. Do not go home before you grow old, for going home would surely break your heart.

Defining the connections

In your database configuration file, app/config/database.php, you can define multiple database connections of any type. In fact, you can define as many connections as you like. For example, if your application pulls data from two MySQL databases, you can define each of them separately.

<?php
return array(

    'default' => 'mysql',

    'connections' => array(

        # Primary database connection
        'mysql' => array(
            'driver'    => 'mysql',
            'host'      => 'host1',
            'database'  => 'database1',
            'username'  => 'user1',
            'password'  => 'pass1',
            'charset'   => 'utf8',
            'collation' => 'utf8_unicode_ci',
            'prefix'    => '',
        ),

        # Secondary database connection
        'mysql2' => array(
            'driver'    => 'mysql',
            'host'      => 'host2',
            'database'  => 'database2',
            'username'  => 'user2',
            'password'  => 'pass2',
            'charset'   => 'utf8',
            'collation' => 'utf8_unicode_ci',
            'prefix'    => '',
        ),
    ),
);

Our default connection is still set to mysql, which means the application will keep using the mysql connection unless we specify otherwise.

Schema

Schema::connection('mysql2')->create('lp_table', function($table)
{
    $table->increments('id');
});

The DB class

DB::connection('mysql2')->table('article')->where...

Eloquent

<?php

class SomeModel extends Eloquent {

    protected $connection = 'mysql2';
    .......
}

?>

Or:

<?php

class SomeController extends BaseController {

    public function someMethod()
    {
        $someModel = new SomeModel;

        $someModel->setConnection('mysql2');

        $something = $someModel->find(1);

        return $something;
    }
    .......
}

?>

https://www.itlipeng.cn/2015/12/24/%E5%9C%A8-laravel-%E4%B8%AD%E4%BD%BF%E7%94%A8%E5%A4%9A%E4%B8%AA%E6%95%B0%E6%8D%AE%E5%BA%93%E6%93%8D%E4%BD%9C%E6%95%B0%E6%8D%AE/

Green mountains rise facing each other on either bank; a lone sail comes drifting in from the edge of the sun.

The problem

To make it easy to preview pages in real time while modifying source code during front-end development, here is a very handy tool: browser-sync.

For installation and usage, see the official documentation at https://browsersync.io/; the repository is at https://github.com/BrowserSync/browser-sync.

The CLI example given in the official Get Started guide is:

browser-sync start --server --files "css/*.css"

I wrote it into an npm script; the relevant part of package.json looks like this:

{
    ...

    "scripts": {
        "dev": "browser-sync start --server --files 'css/*.css'"
    },
    "devDependencies": {
        "browser-sync": "^2.18.13"
    },

    ...
}

Then I ran npm run dev and everything in the console output looked normal.

However, when I modified css/style.css the browser did not refresh, which means browser-sync was not actually watching css/*.css for changes.

Analysis

So I went through the browser-sync issues and found that someone had run into the same problem, and a solution was given.

The problem lies in the command-line arguments; comparing them carefully, we can see it too:

The CLI command I wrote was

browser-sync start --server --files 'css/*.css'

while the official CLI command is

browser-sync start --server --files "css/*.css"

The difference is in the quotes (browser-sync could not parse the single-quoted pattern).

The fix
So I replaced the ' in the npm script with " and the problem was solved. The updated package.json looks like this:

{
    ...

    "scripts": {
        "dev": "browser-sync start --server --files \"css/*.css\""
    },
    "devDependencies": {
        "browser-sync": "^2.18.13"
    },

    ...
}

https://blog.csdn.net/pwc1996/article/details/76849876

Beneath the pines I ask the boy; he says his master has gone to gather herbs, somewhere on this very mountain, but the clouds are so deep he knows not where.

When building a user authorization system with soft-deletable data we might encounter a problem: a deleted user who tries to register with the same email address gets an error saying it is already in use. What can we do to prevent this? Here is a fairly simple example of how it can be solved.

First of all – by default the Laravel migration for the users table has a unique index on the email field. This needs to be modified – we need the combination of the email and deleted_at fields to be unique. So let's write our migration like this:

public function up()
{
    Schema::create('users', function (Blueprint $table) {
        $table->increments('id');
        $table->string('name');
        $table->string('email');
        $table->string('password');
        $table->rememberToken();
        $table->timestamps();
        $table->softDeletes();

        $table->unique(['email', 'deleted_at']);
    });
}

As you can see, we have a single unique index covering both email and deleted_at at the same time; this is called a composite index. From now on it is impossible to have two entries with identical information in both fields – except when deleted_at is NULL, because a UNIQUE index allows multiple NULL values in a column. This is not a bug (see http://dev.mysql.com/doc/refman/5.7/en/create-index.html and the quote below).

A UNIQUE index creates a constraint such that all values in the index must be distinct. An error occurs if you try to add a new row with a key value that matches an existing row. For all engines, a UNIQUE index permits multiple NULL values for columns that can contain NULL. If you specify a prefix value for a column in a UNIQUE index, the column values must be unique within the prefix.

Now, to handle the case where a user has not been deleted yet and we don't want anyone else to register with the same email, we need to change the email validation rule:

Open our app/Http/Controllers/Auth/AuthController.php file (Request or other Controller where you have the validation rule) and change your email validation to this:

'email' => 'required|email|max:255|unique:users,email,NULL,id,deleted_at,NULL',

You might need to adjust the table name, column name, etc. for your needs.

And that's it. Your user is able to register again with the same email as before, and Laravel will make sure that the email is not in use among active users. Just don't forget that when restoring a user you need to check that there are no active users with an identical email (a restore-time check is sketched after the list below). This might not be the best solution for you, so we made a tiny list of other possible solutions. Feel free to choose any other if this doesn't work for you, or suggest a new one!

  • Make a second table where you store deleted users' emails, and set a random string in the original column. On restore, just copy the email back and delete the dummy row.
  • On user delete (using an observer or manually), prepend the user's email with a prefix such as _deleted.
  • … your suggestions?
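For the restore-time check mentioned above, a minimal sketch (assuming Eloquent soft deletes; the names are illustrative) could look like this:

// Restore a soft-deleted user only if no active user currently holds the same email.
$trashedUser = User::onlyTrashed()->findOrFail($id);

$emailInUse = User::where('email', $trashedUser->email)->exists();

if (! $emailInUse) {
    $trashedUser->restore();
}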

http://laraveldaily.com/make-soft-deleted-user-email-available/