While recording the product sales of a multi-tenant application, we need a way to store the metrics that guarantees a solid separation between tenants. One idea is to use key names like shop:{shopId}:product:{productId}:sales; that way we'll have a key per product for each shop, since product IDs might co-exist in multiple shops. We can increment the value of each key on every purchase and read it when needed, and if we need the sales for the whole business we can do something like:
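One possible sketch is to look the keys up by pattern and read them in one go (note that keys scans the whole keyspace, so scan would be preferable on a busy production instance):

$keys = Redis::keys("shop:{$shopId}:product:*:sales"); // Find every product sales key for this shop

$sales = Redis::mget($keys); // Read all of them in one go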
This will bring the sales of every product inside a given business.
That sounds cool, but it seems like you'll introduce a better approach? I've been reading this post from the Instagram Engineering blog and I was amazed by the performance gain they described from using Redis Hashes over regular strings; let me share some of the numbers:
Having 1 million string keys needed about 70MB of memory
Having 1,000 hashes, each with 1,000 keys, needed only 17MB!
The reason behind that is that hashes can be encoded very efficiently in a small amount of memory, so the Redis makers recommend that we use hashes whenever possible, since "a few keys use a lot more memory than a single key containing a hash with a few fields". A key represents a Redis object that holds a lot more information than just its value; a hash field, on the other hand, holds only the value assigned to it, which is why it's much more efficient.
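So instead of one key per product, a shop's sales can live in a single hash. For example (a sketch reusing the shop key naming from before):

Redis::hmset("shop:{$shopId}:sales", 'product:1', 100, 'product:2', 400);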
This will build a Redis hash with two fields, product:1 and product:2, holding the values 100 and 400.
The hmset command gives us the ability to set multiple fields of a hash in one go; there's also an hset command that we can use to set a single field.
We can read the values of hash fields using the following:
Redis::hget("shop:{$shopId}:sales", 'product:1'); // To return a single value
Redis::hmget("shop:{$shopId}:sales", 'product:1', 'product:2'); // To return values from multiple keys
Redis::hvals("shop:{$shopId}:sales"); // To return values of all fields
Redis::hgetall("shop:{$shopId}:sales"); // Also returns values of all fields
In the case of hmget and hvals the return value is an array of values [100, 400]; however, in the case of hgetall the return value is an array of keys & values:
["product:1", 100, "product:2", 400]
Much more organized than having multiple keys! Yes, and you also stop polluting the key namespace with lots of complex-named keys.
Along with all the benefits mentioned above, there are also a number of useful operations you can perform on a hash key:
Incrementing & Decrementing
Redis::hincrby("shop:{$shopId}:sales", "product:1", 18); // To increment the sales of product one by 18
Redis::hincrbyfloat("shop:{$shopId}:sales", "product:1", 18.9); // To increment the sales of product one by 18.9
To decrement you just need to provide a negative value; there's no hdecrby command for hash fields.
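For example (a hypothetical decrement of 5 sales):

Redis::hincrby("shop:{$shopId}:sales", "product:1", -5); // Decrements the sales of product one by 5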
Field Existence
Like string keys, you can check whether a hash field exists:
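A sketch of the call, reusing the same hash as before:

Redis::hexists("shop:{$shopId}:sales", 'product:1'); // Returns 1 if the field exists, 0 otherwise

There's also an hstrlen command:

Redis::hstrlen("shop:{$shopId}:sales", 'product:1');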
This command returns the string length of the value stored at the given field.
Performance comes with a cost
As we mentioned before, a hash with a few fields is much more efficient than storing a few keys. A key stores a complete Redis object that contains information about the stored value as well as its expiration time, idle time, reference count, and the type of encoding used internally.
Technically, if we create one key (a Redis object) that contains multiple string fields, it'll require much less memory since every field holds nothing but its assigned value, and a hash with a small number of fields is even encoded into a length-prefixed string in a format like:
hashValue = [6]field1[4]val1[6]field2[4]val2
Since a hash field holds only a string value, we can't associate an expiration time with it. The makers of Redis suggest that, if need be, we store an individual field holding the expiration time for each field, then fetch both fields together and compare to see if the field is still alive:
Redis::hmset('hashKey', 'field1', 'field1_value', 'field1_expiration', '1495786559');

So whenever we want to use that field we need to bring the expiration value as well and do the extra work ourselves:
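A sketch of how that check might look; deleting the expired field with hdel is just one possible cleanup strategy:

list($value, $expiration) = Redis::hmget('hashKey', 'field1', 'field1_expiration');

if ($expiration < time()) {
    // Treat the field as expired: discard the value and remove both fields
    Redis::hdel('hashKey', 'field1', 'field1_expiration');
}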
Some information about encoding hashes

From the Redis docs:
Hashes, when smaller than a given number of fields, and up to a maximum field size, are encoded in a very memory efficient way that uses up to 10 times less memory (with 5 time less memory used being the average saving). Since this is a CPU / memory trade off it is possible to tune the maximum number of fields and maximum field size.
By default, hashes are encoded this way when they contain fewer than 512 fields and the largest value stored in a field is shorter than 64 bytes, but you can adjust these limits using the config command:
Redis::config('set', 'hash-max-zipmap-entries', 1000); // Sets the maximum number of fields before the hash stops being encoded
Redis::config('set', 'hash-max-zipmap-value', 128); // Sets the maximum size of a hash field before the hash stops being encoded
A new purchase was made; let's increment the sales:
Redis::incrby('product:1:sales', 100)
Redis::incr('product:1:count')
Here we increment the sales key by 100, and increment the count key by 1.
We can also decrement in the same way:
Redis::decrby('product:1:sales', 100)
Redis::decr('product:1:count')
But when it comes to dealing with floating point numbers we need to use a special command:
Redis::incrbyfloat('product:1:sales', 15.5)
Redis::incrbyfloat('product:1:sales', -30.2)
There’s no decrbyfloat command, but we can pass a negative value to the incrbyfloat command to have the same effect.
The incrby, incr, decrby, decr, and incrbyfloat commands all return the value after the operation as a response.
Retrieve and update
Now we want to read the latest sales number and reset the counters to zero; maybe we do that at the end of each day:
$value = Redis::getset('product:1:sales', 0)
Here $value will hold the previous value of the key (1000 in our example), and if we read the value of that key after this operation it'll be 0.
Keys Expiration
Let's say we want to send a notification to the owner when inventory is low, but we only want to send that notification once every hour instead of every time a new purchase is made. So maybe we set a flag once and only send the notification when that flag doesn't exist:
Redis::set('user:1:notified', 1, 'EX', 3600);
Now this key will expire after 3600 seconds (1 hour). We can check if the key exists before attempting to set it and send the notification:
if (Redis::get('user:1:notified')) {
    return;
}

Redis::set('user:1:notified', 1, 'EX', 3600);

Notifications::send();
Notice: There's no guarantee that the value of user:1:notified won't change between the get and set operations. We'll discuss atomic command groups later, but this example is enough to show how each individual command works.
We can set the expiration of a key in milliseconds as well using:
Redis::set('user:1:notified', 1, 'PX', 3600);
And you may also use the expire command and provide the timeout in seconds:
Redis::expire('user:1:notified', 3600);
Or in milliseconds:
Redis::pexpire('user:1:notified', 3600);
And if you want the keys to expire at a specific time you can use expireat and provide a Unix timestamp:
Redis::expireat('user:1:notified', '1495469730')
Is there a way I can check when a key should expire? You can use the ttl command (Time To Live), which will return the number of seconds remaining until the key expires.
Redis::ttl('user:1:notified');
That command may return -2 if the key doesn’t exist, or -1 if the key has no expiration set.
You can also use the pttl command to get the TTL in milliseconds.
What if I want to cancel expiration?
Redis::persist('user:1:notified');
This will remove the expiration from your key. It returns 1 if successful, or 0 if the key doesn't exist or had no expiration set in the first place.
Keys Existence
Let's say there's only one Laracon ticket available and we need to close purchasing once that ticket is sold; we can do:
Redis::set('ticket:sold', $user->id, 'NX')
This will only set the key if it doesn't already exist; the next script that tries to set the key will receive null as a response from Redis, which means the key wasn't set.
You can also instruct Redis to set the key only if it exists:
Redis::set('ticket:sold', $user->id, 'XX')
If you want to simply check if a key exists, you can use the exists command:
Redis::exists('ticket:sold')
Reading multiple keys in one go
Sometimes you might need to read multiple keys in one go; you can do this:
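For example, a sketch of the call using the sales keys from earlier:

Redis::mget('product:1:sales', 'product:2:sales');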
The response of this command is an array with the same size as the list of given keys; if a key doesn't exist, its value will be null.
As we discussed before, Redis executes individual commands atomically, which means nothing can change the value of any of the keys once the operation has started, so it's guaranteed that the values returned are not altered between reading the first key and the last key.
Using mget is better than firing multiple get commands because it reduces the RTT (round trip time), which is the time each individual command takes to travel from the client to the server and carry the response back to the client. More on that later.
Deleting Keys
You can also delete multiple keys at once using the del command:
Redis::del('previous:sales', 'previous:return');
Renaming Keys
You can rename a key using the rename command:
Redis::rename('current:sales', 'previous:sales');
An error is returned if the original key doesn't exist, and the destination key is overwritten if it already exists.
Renaming keys is usually a fast operation, unless a key with the desired name already exists; in that case Redis will delete that existing key before renaming this one, and deleting a key that holds a very big value might be a bit slow.
So I have to check first if the second key exists to prevent overriding? Yeah, you can use exists to check if the second key exists… OR:
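You can use the renamenx command; a sketch of the call:

Redis::renamenx('current:sales', 'previous:sales');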
This will check first if the second key exists; if it does, it just returns 0 without doing anything, so it only renames the key when the second key does not exist.
Redis is a storage server that keeps your data in memory, which makes read & write operations very fast. You can also configure it to store the data on disk occasionally, replicate to secondary nodes, and automatically split the data across multiple nodes.
That said, you might want to use Redis when fast access to your data is necessary (caching, live analytics, a queue system, etc.), but you'll need another data store when the data is too large to fit in memory or when you don't really need fast access. A combination of Redis and another relational or non-relational database gives you the power to build large scale applications where you can efficiently store large amounts of data but also read portions of it very fast.
Use Case
Think of a cloud-based Point Of Sale application for restaurants; as an owner it's extremely important to be able to monitor different statistics related to sales, inventory, performance of different branches, and many other metrics. Let's focus on one particular metric, Product Sales: as a developer you want to build a dashboard where the restaurant owner can see live updates on which products have more sales throughout the day.
select SUM(order_products.price), products.name
from order_products
join products on order_products.product_id = products.id
where DATE(order_products.created_at) = CURDATE()
and order_products.status = "done"
group by products.name
The SQL query above will bring you a list with the sales each product made throughout the day, but it's a relatively heavy query to run when the restaurant serves thousands of orders every day. Simply put, you can't run this query in a live-updates dashboard that polls for updates every 60 seconds or so. It'd be cool if you could cache the product sales every time an order is served and read from that cache instead, something like:
Event::listen('newOrder', function ($order) {
    $order->products->each(function ($product) {
        SomeStorage::increment("product:{$product->id}:sales:2017-05-22", $product->sales);
    });
});
So now on every new order we’ll increment the sales for each product, and we can simply read these numbers later like:
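An illustrative sketch, keeping the same SomeStorage placeholder; as noted below, these method names are only meant to give a feel for the idea:

$sales = SomeStorage::get("product:{$productId}:sales:2017-05-22");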
Replace SomeStorage with Redis and now you have today’s product sales living in your server’s memory, and it’s super fast to read from memory, so you don’t have to run that large query every time you need to update the analytics numbers in the live dashboard.
Another Use Case
Now you want to know the number of unique visitors opening your website every day. We might store this in SQL, having a table with user_id & date fields, and later on we can just run a query like:
select COUNT(Distinct user_id) as count from unique_visits where date = "2017-05-22"
So we just have to add a record to that database table every time a user visits the site. But on a high traffic website this extra DB interaction might not be the best idea; wouldn't it be cool if we could just do:
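Something along these lines; the method names are purely illustrative, as noted right after:

SomeStorage::addUniqueVisit('2017-05-22', $userId);

$count = SomeStorage::countUniqueVisits('2017-05-22');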
Storing this in memory is fast and reading the stored numbers from memory is super fast, which is exactly what we need, so I hope by now you have a feel for when using Redis might be a good idea. And by the way, the method names used in the above examples don't exist in Redis; I'm just trying to give you a feel for what you can achieve.
The Atomic nature of Redis operations
Individual commands in Redis are guaranteed to be atomic, meaning nothing will change in the middle of executing a command. For example:
$monthSales = Redis::getSet('monthSales', 0);
This command gets the value of the monthSales key and then sets it to zero. It's guaranteed that no other client can change the value, or maybe rename the key, between the get and set operations; that's due to the single-threaded nature of Redis, in which a single system process serves all clients at the same time but can only perform one operation at a time. It's similar to how you can listen to multiple clients' alteration requests on a project at the same time but can only work on one alteration at a given moment.
There's also a way to guarantee the atomicity of a group of commands using transactions (more on that later), but briefly, let's say you have 2 clients:
Client 1 wants to increment the value
Client 1 wants to read that value
Client 2 wants to increment the value
These commands might run in the following order:
Client 1: Increment value
Client 2: Increment value
Client 1: Read value
This will cause the read operation from Client 1 to give unexpected results, since the value was altered in the middle; that's when a transaction makes sense.
Exceptions are a very important method for controlling the execution flow of an application. When an application request diverges from the happy path, it’s often important that you halt execution immediately and take another course of action.
Dealing with problems in an API is especially important. The response from the API is the user interface and so you need to ensure you give a detailed and descriptive explanation of what went wrong.
This includes the HTTP Status Code, an error code that is linked to your documentation, and a human readable description of what went wrong.
In today’s tutorial I’m going to show you how I structure my Laravel API applications to use Exceptions. This structure will make it very easy to return detailed and descriptive error responses from your API, as well as make testing your code a lot easier.
After a brief hiatus I’m returning to writing about Laravel. After writing about PHP week after week for a long time I was really burned out and needed to take a break.
However, over the last couple of months I’ve been doing some of my best work focusing on Laravel API applications. This has given me a renewed focus with lots of new ideas and techniques I want to share with you.
Instead of starting a new project I’m just going to reboot the existing Cribbb series. All of the code from the previous tutorials can still be found within the Git history.
Understanding HTTP Status Codes
The first important thing to understand when building an API is that the Internet is built upon standards and protocols. Your API is an “interface” into your application and so it is very important that you adhere to these standards.
When a web service returns a response from a request, it should include a Status Code. The Status Code describes the response, whether the request was successful or if an error has occurred.
For example, when a request is successful, your API should return a Status Code of 200, when the client makes a bad request, you should return a Status Code of 400, and if there is an internal server error, you should return a Status Code of 500.
By sticking to these Status Codes and using them under the correct conditions, we can make our API easier to consume for third-party developers and applications.
If you are unfamiliar with the standard HTTP Status Codes I would recommend bookmarking the Wikipedia page and referring to it often. You will find you only ever really use a small handful of the status codes, but it is a good idea to be familiar with them.
How this Exception foundation will work
Whenever we return a response from the API it must use one of the standard HTTP status codes. We must also use the correct status code to describe what happened.
Using the incorrect status code is a really bad thing to do because you are giving the consumer of the API bad information. For example, if you return a 200 Status Code instead of a 400 Status Code, the consumer won’t know they are making an invalid request.
Therefore, we should be able to categorise anything that could possibly go wrong in the application as one of the standard HTTP Status Codes.
For example, if the client requests a resource that doesn’t exist we should return a 404 Not Found response.
To trigger this response from our code, we can throw a matching NotFoundException Exception.
To do this we can create base Exception classes for each HTTP status code.
Next, in our application code we can create specific Exceptions that extend the base HTTP Exceptions to provide a more granular understanding of what went wrong. For example, we might have a UserNotFound Exception that extends the NotFoundException base Exception class.
This means under an exceptional circumstance we can throw a specific Exception for that problem and let it bubble up to the surface.
The application will automatically return the correct HTTP response from the base Exception class.
Finally we also need a way of providing a descriptive explanation of what went wrong. We can achieve this by defining error messages that will be injected when the exception class is thrown.
Hopefully that makes sense. But even if it doesn’t, continue reading as I think it will all fall into place as we look at some code.
Creating the Errors configuration file
It’s very important that you provide your API responses with a descriptive explanation of what went wrong.
If you don’t provide details of exactly what went wrong the consumers of your API are going to struggle to fix the issue.
To keep all of the possible error responses in one place I’m going to create an errors.php configuration file under the config directory.
This will mean we have all of the possible errors in one place which will make creating documentation a lot easier.
It should also make it easy to provide language translations for the errors too, rather than trying to dig through the code to find every single error!
To begin with I’ve created some errors for a couple of the standard HTTP error responses:
<?php

return [

    'bad_request' => [
        'title' => 'The server cannot or will not process the request due to something that is perceived to be a client error.',
        'detail' => 'Your request had an error. Please try again.'
    ],

    'forbidden' => [
        'title' => 'The request was a valid request, but the server is refusing to respond to it.',
        'detail' => 'Your request was valid, but you are not authorised to perform that action.'
    ],

    'not_found' => [
        'title' => 'The requested resource could not be found but may be available again in the future. Subsequent requests by the client are permissible.',
        'detail' => 'The resource you were looking for was not found.'
    ],

    'precondition_failed' => [
        'title' => 'The server does not meet one of the preconditions that the requester put on the request.',
        'detail' => 'Your request did not satisfy the required preconditions.'
    ]

];
As the application is developed I can add to this list. It’s also often a good idea to provide a link to the relevant documentation page. At some point in the future I can simply add this into each error.
Creating the abstract Exception
Next I want to create an abstract Exception class that all of my application specific exceptions will extend from.
This will make it easy to catch all of the application specific exceptions and provides a clean separation from the other potential exceptions that may be thrown during the application’s execution.
For each exception I will provide an id, status, title and detail.
This is to stay close to the JSON API specification.
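A minimal sketch of what such an abstract class might look like, with the id, status, title and detail fields described above and a build() helper (referenced later) that is assumed to fill those fields from an error definition such as an entry in config/errors.php:

<?php namespace Cribbb\Exceptions;

use Exception;

abstract class CribbbException extends Exception
{
    /** @var string An identifier for the type of error */
    protected $id;

    /** @var string The HTTP status code, stored as a string per the JSON API spec */
    protected $status;

    /** @var string A short, human readable summary of the problem */
    protected $title;

    /** @var string A human readable explanation specific to this occurrence */
    protected $detail;

    /**
     * Fill the exception fields from an error definition
     * (e.g. an entry from config/errors.php)
     *
     * @param string $id
     * @param array $error
     * @return void
     */
    protected function build($id, array $error)
    {
        $this->id = $id;
        $this->title = $error['title'];
        $this->detail = $error['detail'];
    }
}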
I will also provide a getStatus() method:
/**
 * Get the status
 *
 * @return int
 */
public function getStatus()
{
    return (int) $this->status;
}
The JSON API specification states that the status code should be a string. I’m casting it as an int in this method so I can provide the correct response code to Laravel.
I will also provide a toArray() method to return the Exception as an array. This is just for convenience:
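A minimal sketch using the four fields above could be:

/**
 * Return the Exception as an array
 *
 * @return array
 */
public function toArray()
{
    return [
        'id'     => $this->id,
        'status' => $this->status,
        'title'  => $this->title,
        'detail' => $this->detail,
    ];
}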
Each base foundation Exception simply needs to provide the status code and a __construct() method that calls the build() method and passes the message to the parent.
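As a sketch, and assuming the abstract class and build() helper outlined above, a NotFoundException base class might look like this:

<?php namespace Cribbb\Exceptions;

class NotFoundException extends CribbbException
{
    /** @var string */
    protected $status = '404';

    /**
     * Create the Exception using the not_found error definition
     *
     * @param string $message
     * @return void
     */
    public function __construct($message = null)
    {
        $this->build('not_found', config('errors.not_found'));

        parent::__construct($message ?: $this->detail);
    }
}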
You can now create these simple Exception classes to represent each HTTP status code your application will be using.
If you need to add a new HTTP status code response, it’s very easy to just create a new child class.
Creating the application Exceptions
Finally we can use these base HTTP Exceptions within our application code to provide more specific Exceptions.
For example you might have a UserNotFound Exception:
<?php namespace Cribbb\Users\Exceptions;

use Cribbb\Exceptions\NotFoundException;

class UserNotFound extends NotFoundException
{
}
Now whenever you attempt to find a user, but the user is not found you can throw this Exception.
The Exception will bubble up to the surface and the correct HTTP Response will be automatically returned with an appropriate error message.
This means that if an Exception is thrown, you can just let it go; you don't have to catch it, because the consumer needs to be informed that the user was not found.
And in your tests you can assert that a UserNotFound exception was thrown, rather than just a generic NotFound exception. This means you can write tests where you are confident the test is failing for the correct reason, which makes your tests much easier to read and understand.
Dealing with Exceptions and returning the correct response
Laravel allows you to handle Exceptions and return a response in the Handler.php file under the Exceptions namespace.
The first thing I’m going to do is to add the base CribbbException class to the $dontReport array.
/**
 * A list of the exception types that should not be reported
 *
 * @var array
 */
protected $dontReport = [
    HttpException::class,
    CribbbException::class
];
I don’t need to be told that an application specific Exception has been thrown because this is to be expected. By extending from the base CribbbException class we’ve made it very easy to capture all of the application specific exceptions.
Next I’m going to update the render() method to only render the Exception if we’ve got the app.debug config setting to true, otherwise we can deal with the Exception in the handle() method:
/**
 * Render an exception into an HTTP response
 *
 * @param Request $request
 * @param Exception $e
 * @return Response
 */
public function render($request, Exception $e)
{
    if (config('app.debug')) {
        return parent::render($request, $e);
    }

    return $this->handle($request, $e);
}
And finally we can convert the Exception into a JsonResponse in the handle() method:
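A rough sketch of such a method, consistent with the description below, might be as follows; the fallback response and the MethodNotAllowedHttpException details are assumptions, and the two Symfony exception classes are assumed to be imported at the top of Handler.php:

/**
 * Convert the Exception into a JSON HTTP Response
 *
 * @param Request $request
 * @param Exception $e
 * @return JsonResponse
 */
private function handle($request, Exception $e)
{
    // Fallback for anything not explicitly handled below (assumed wording)
    $data   = ['title' => 'Internal Server Error', 'detail' => 'Something went wrong. Please try again.'];
    $status = 500;

    if ($e instanceof CribbbException) {
        $data   = $e->toArray();
        $status = $e->getStatus();
    }

    if ($e instanceof NotFoundHttpException) {
        $data   = config('errors.not_found');
        $status = 404;
    }

    if ($e instanceof MethodNotAllowedHttpException) {
        $data   = ['title' => 'Method Not Allowed', 'detail' => 'The method used for the request is not allowed for this resource.'];
        $status = 405;
    }

    return response()->json($data, $status);
}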
For the CribbbException classes we can simply call the toArray() method to return the Exception into an array as well as the getStatus() method to return the HTTP Status Code.
We can also deal with any other Exception classes in this method. As you can see I’m catching the NotFoundHttpException and MethodNotAllowedHttpException Exceptions in this example so I can return the correct response.
Finally, we can return a JsonResponse by using the json() method on the response() helper function, passing in the $data and $status.
Conclusion
Exceptions are a very important aspect of application development and they are an excellent tool in controlling the execution flow of the application.
Under exceptional circumstances you need to halt the application and return an error rather than continuing on with execution. Exceptions make this very easy to achieve.
It’s important that an API always returns the correct HTTP status code. The API is the interface to your application and so it is very important that you follow the recognised standards and protocols.
You also need to return a human readable error message as well as provide up-to-date documentation of the problem and how it can be resolved.
In today’s tutorial we’ve created a foundation for using Exceptions in the application by creating base classes for each HTTP status code.
Whenever a problem arises in the application we have no reason not to return the specific HTTP status code for that problem.
We’ve also put in place an easy way to list detailed error messages for every possible thing that could go wrong.
This will be easy to keep up-to-date because it’s all in one place.
And finally we’ve created an easy way to use Exceptions in the application. By extending these base Exceptions with application specific exceptions we can create a very granular layer of Exceptions within our application code.
This makes it very easy to write tests where you can assert that the correct Exception is being thrown under specific circumstances.
It also makes it really easy to deal with exceptions, because nine times out of ten you can just let the exception bubble up to the surface.
When the exception reaches the surface, the correct HTTP status code and error message will automatically be returned to the client.
public function getArticles()
{
    return $this->whereHas('user', function ($q) {
        $q->active();
    })->get();
}
Prefer to use Eloquent over using Query Builder and raw SQL queries. Prefer collections over arrays
Eloquent allows you to write readable and maintainable code. Also, Eloquent has great built-in tools like soft deletes, events, scopes etc.
Bad:
SELECT *
FROM `articles`
WHERE EXISTS (SELECT *
              FROM `users`
              WHERE `articles`.`user_id` = `users`.`id`
              AND EXISTS (SELECT *
                          FROM `profiles`
                          WHERE `profiles`.`user_id` = `users`.`id`)
              AND `users`.`deleted_at` IS NULL)
AND `verified` = '1'
AND `active` = '1'
ORDER BY `created_at` DESC
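For comparison, a sketch of the Eloquent equivalent (assuming a verified scope and a user.profile relation exist on the models):

Good:

Article::has('user.profile')->verified()->latest()->get();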
When creating a user authorization system with soft-deletable data we might encounter a problem: a deleted user might try to register with the same email address and get an error that it is already in use. What can we do to prevent this? Here is a quite simple example of how it could be solved.
First of all, by default the Laravel migration for the users table has a unique index on the email field. This needs to be modified: we need the combination of the email and deleted_at fields to be unique. So let's write our migration like this:
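A sketch of what the users migration's up() method might contain; the key part is the composite unique index at the end, while the rest follows the default Laravel users table plus soft deletes:

Schema::create('users', function (Blueprint $table) {
    $table->increments('id');
    $table->string('name');
    $table->string('email');
    $table->string('password', 60);
    $table->rememberToken();
    $table->softDeletes();
    $table->timestamps();

    // Composite unique index on email + deleted_at
    $table->unique(['email', 'deleted_at']);
});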
As you can see, we have a unique index on email and deleted_at at the same time; this is called a composite index. From now on it is impossible to have two entries with identical values in both fields. The one exception is when deleted_at is NULL: a unique index permits multiple NULL values in a column (see http://dev.mysql.com/doc/refman/5.7/en/create-index.html and the quote below), so this is not a bug.
A UNIQUE index creates a constraint such that all values in the index must be distinct. An error occurs if you try to add a new row with a key value that matches an existing row. For all engines, a UNIQUE index permits multiple NULL values for columns that can contain NULL. If you specify a prefix value for a column in a UNIQUE index, the column values must be unique within the prefix.
Now, to cover the case where our user might not be deleted yet and we don't want them to register with the same email again, we need to change the email validation rule:
Open the app/Http/Controllers/Auth/AuthController.php file (or the Request or other Controller where you have the validation rule) and change your email validation to this:
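A sketch of the rule, assuming the default users table and column names; the extra parameters add a deleted_at IS NULL condition to the uniqueness check:

'email' => 'required|email|max:255|unique:users,email,NULL,id,deleted_at,NULL',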
You might need to modify table name, column name and etc. for your needs.
And that's it. Your user is able to register again with the same email they used before, and Laravel will make sure the email is not in use by any active user. Just don't forget that when restoring a deleted user you need to check that there are no active users with an identical email. This might not be the best solution for you, so we made a tiny list of other possible solutions; feel free to choose another one if this doesn't work for you, or suggest a new one!
Make a second table where you store deleted users' emails, and set a random string on the email column in the original table. On restore, just copy the email back and delete the dummy row.
On user delete (using an observer or manually), prepend the user's email with a prefix such as _deleted.