Recording daily product page views with Redis
All units, listen up: everyone except me~ charge!
The configuration file `database.php`:

```php
'redis' => [
    // ...
],
```

Connection and storage:

```php
// connection
// ...
```
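The PHP code from the original post survives only partially. As a hedged, language-neutral sketch of the idea, here is the same daily-view counter in Python, with an in-memory stand-in for Redis's `INCR` (with redis-py you would call `r.incr(key)` on a real connection) and a hypothetical key scheme `product:views:<id>:<date>`, one counter per product per day:

```python
from datetime import date

class FakeRedis:
    """In-memory stand-in for a Redis connection, emulating INCR."""
    def __init__(self):
        self.store = {}

    def incr(self, key):
        self.store[key] = self.store.get(key, 0) + 1
        return self.store[key]

def record_view(r, product_id, day=None):
    """Increment today's view counter for a product and return the new count."""
    day = day or date.today().isoformat()
    key = f"product:views:{product_id}:{day}"  # hypothetical key scheme
    return r.incr(key)

r = FakeRedis()
record_view(r, 42, day="2024-01-01")   # -> 1
record_view(r, 42, day="2024-01-01")   # -> 2
```

Because each day gets its own key, yesterday's counters remain queryable, and you can expire old keys if history is not needed.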
The more you understand, the fewer people understand you. Recently, one line of configuration in the Laravel documentation caught my interest:
```shell
* * * * * php /path/to/artisan schedule:run >> /dev/null 2>&1
```
You will often see this pattern in the shell:
Breaking the combination `>/dev/null 2>&1` into five parts:

1. `>` redirects output somewhere, e.g. `echo "123" > /home/123.txt`
2. `/dev/null` is the null device file; anything written to it is discarded
3. `2>` redirects stderr (standard error)
4. `&` here means "the same as": `2>&1` sends stream 2 wherever stream 1 currently goes
5. `1` is stdout (standard output) and is the default, so `>/dev/null` is equivalent to `1>/dev/null`

Therefore `>/dev/null 2>&1` can also be written as `1>/dev/null 2>&1`.
So the statement in the title executes as follows:

`1>/dev/null`: first, stdout is redirected to the null device file, i.e. nothing is printed to the terminal.

`2>&1`: then stderr is redirected to stdout; since stdout already points at the null device, stderr ends up discarded as well.
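You can verify both the combination and why its order matters directly in a shell (the `noisy` helper is mine, for illustration):

```shell
# A helper that writes one line to stdout and one to stderr:
noisy() { echo "to stdout"; echo "to stderr" >&2; }

# Everything silenced: stdout -> /dev/null first, then stderr -> (same as stdout).
silent=$(noisy >/dev/null 2>&1)

# Order reversed: stderr is pointed at the *current* stdout (the capture)
# before stdout itself is discarded, so stderr still comes through.
leaked=$(noisy 2>&1 >/dev/null)
```

After running this, `$silent` is empty while `$leaked` contains `to stderr`, which is exactly the order-of-evaluation point made above.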
The mountain passes are hard to cross; who pities the man who has lost his way? Meeting by chance like duckweed on the water, we are all guests far from home. PHP is a web development language, and we usually run it inside a web server and access it through a browser, so its command-line usage and options rarely get attention. Yet, especially on Unix-like systems, PHP also works as a scripting language for shell-style tasks.

> Execution time: php-cli has no time limit by default, whereas web PHP defaults to 30 seconds.
Run a given PHP file

```shell
php my_script.php
```

Run PHP code directly

```shell
php -r "print_r(get_defined_constants());"
```

Syntax check

```shell
php -l index.php
```

Show version information

```shell
php -v
```

Show the configuration files in use

```shell
php --ini
```

Receiving arguments

```php
<?php
// $argv holds the command-line arguments; $argv[0] is the script name
var_dump($argv);
```

Start the built-in web server

```shell
# Run this command in the target directory, then visit http://localhost:8000/hello.php
php -S localhost:8000
```
"Great Sage, where are you off to?" "To tread the Southern Heaven, and shatter the Lingxiao Palace." "And if you never return…" "Then I never return!"

### Two-way data binding
```html
<body id="app">
<!-- ... -->
```

```javascript
window.onload = function () {
    // ...
};
```
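The demo markup and script above survive only as fragments, so here is a minimal sketch of the mechanism behind two-way binding: an accessor property that pushes model changes into the "view", plus a handler that pushes view changes back into the model. `fakeInput` is a plain object standing in for a DOM input; in a browser you would attach an `input` event listener instead of calling `onInput` by hand:

```javascript
const fakeInput = { value: "" };   // stand-in for <input>

const model = {};
let _text = "";
Object.defineProperty(model, "text", {
  get() { return _text; },
  set(v) { _text = v; fakeInput.value = v; }  // model -> view
});

// view -> model: in a browser this would run on the input's `input` event
function onInput() { _text = fakeInput.value; }

model.text = "hello";        // setting the model updates the fake input
fakeInput.value = "typed";   // the user "types"...
onInput();                   // ...and the event handler syncs the model
```

Frameworks wrap exactly this pair of hooks (getter/setter plus DOM event) behind a declarative syntax.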
I think the best kind of love is when two people both live fully, experiencing life's many pleasures together while tolerating and encouraging each other. When the other person opens a new world for you, you haven't rejected the whole world just because you love one person.
CORS is a W3C standard; the full name is Cross-Origin Resource Sharing.

It allows the browser to issue XMLHttpRequest requests to cross-origin servers, overcoming the restriction that AJAX may only be used same-origin.

CORS requires support from both the browser and the server. All current browsers support it; on the IE side, IE10 or later is required.

The whole CORS exchange is handled automatically by the browser, with no user involvement. For developers, CORS communication is no different from same-origin AJAX communication; the code is exactly the same. As soon as the browser detects that an AJAX request crosses origins, it automatically adds some extra header fields, and sometimes one extra request, but the user notices nothing.

Therefore the key to CORS is the server. As long as the server implements the CORS interface, cross-origin communication works.

Browsers divide CORS requests into two classes: simple requests and not-so-simple requests.
A request is a simple request if it meets both of the following conditions:

1. The request method is one of these three: GET, HEAD, POST.
2. The HTTP headers do not go beyond these fields: Accept, Accept-Language, Content-Language, Content-Type (and Content-Type is limited to application/x-www-form-urlencoded, multipart/form-data, or text/plain).

A non-simple request is one that places special demands on the server, for example a PUT or DELETE method, or a Content-Type of application/json.
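The browser's decision can be sketched as a small function. This is a simplification for illustration (the function name is mine; the method and header lists follow the CORS rules for simple requests):

```python
SIMPLE_METHODS = {"GET", "HEAD", "POST"}
SIMPLE_HEADERS = {"accept", "accept-language", "content-language", "content-type"}
SIMPLE_CONTENT_TYPES = {
    "application/x-www-form-urlencoded", "multipart/form-data", "text/plain",
}

def is_simple_request(method, headers):
    """Return True if the request would be sent without a preflight."""
    if method.upper() not in SIMPLE_METHODS:
        return False
    for name, value in headers.items():
        if name.lower() not in SIMPLE_HEADERS:
            return False
        if name.lower() == "content-type":
            # strip parameters like "; charset=utf-8" before comparing
            if value.split(";")[0].strip().lower() not in SIMPLE_CONTENT_TYPES:
                return False
    return True
```

So a plain form POST skips the preflight, while a JSON POST or any PUT/DELETE triggers the OPTIONS round trip described below.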
A non-simple CORS request adds one extra HTTP round trip before the real communication, called the preflight request.

The browser first asks the server whether the current page's origin is on the server's allow list, and which HTTP verbs and header fields may be used. Only after a positive answer does the browser send the actual XMLHttpRequest; otherwise it reports an error.

The preflight uses the OPTIONS method, indicating that the request is an inquiry. Its key header field is Origin, which states which origin the request comes from.
To avoid a 404 when the browser's preflight OPTIONS request arrives, we can simply answer and terminate OPTIONS requests directly.

Below, ThinkPHP 5 uses its behavior (hook) feature to solve the cross-origin problem:
```php
<?php
// ...
```

The jQuery request:

```html
<script>
// ...
</script>
```

The server-side response:

```php
Route::post('/onValidateEmail', function() {
    // ...
});
```
CORS serves the same purpose as JSONP but is more powerful.

JSONP supports only GET requests, while CORS supports every type of HTTP request. JSONP's advantage is that it works in old browsers and can request data from sites that do not support CORS.
If you love someone, tell them. Not so that they repay you, but so that in the dark days ahead, when they doubt themselves, they remember that someone in this world loves them this much, and that they are not worthless.

### Installation

```shell
pip install matplotlib
```
Create single_variable.py with the following content:

```python
# coding:utf-8
# ...
```

Create sinx.py with the following content:

```python
# coding:utf-8
# ...
```

Create multi_axis.py with the following content:

```python
# coding:utf-8
# ...
```

Create plot_3d.py with the following content:

```python
# coding:utf-8
# ...
```

Create plot_3d_scatter.py with the following content:

```python
# coding:utf-8
# ...
```

Create plot_3d_surface.py with the following content:

```python
# coding:utf-8
# ...
```
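The scripts' contents are not preserved above. As a hedged sketch of what sinx.py might have looked like, here is a minimal matplotlib plot of y = sin(x) saved to a PNG (the Agg backend renders to a file so no display is needed; the output filename is my choice):

```python
# coding:utf-8
import matplotlib
matplotlib.use("Agg")          # headless backend: render straight to a file
import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(0, 2 * np.pi, 200)   # 200 sample points over one period
plt.plot(x, np.sin(x), label="sin(x)")
plt.xlabel("x")
plt.ylabel("y")
plt.legend()
plt.savefig("sinx.png")
```

The other scripts follow the same skeleton, swapping in multiple axes or the `mplot3d` toolkit for the 3D variants.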
Unlike you young masters, we use all our strength just to stay alive.

Lately, here at Tryolabs, we started gaining interest in big data and search-related platforms, which are giving us excellent resources to create our complex web applications. One of them is Elasticsearch. Elastic{ON}15, the first ES conference, is coming, and since nowadays we see a lot of interest in this technology, we are taking the opportunity to give an introduction and a simple example for Python developers out there who want to begin using it or give it a try.

### 1. What is Elasticsearch?
Elasticsearch is a distributed, real-time, search and analytics platform.
Good question! In the previous definition you can see all these hype-sounding tech terms (distributed, real-time, analytics), so let’s try to explain.
ES is distributed: it organizes information in clusters of nodes, so it will run on multiple servers if we intend it to.
ES is real-time: since data is indexed, we get responses to our queries super fast!
And last but not least, it does searches and analytics. The main problem we are solving with this tool is exploring our data!
A platform like ES is the foundation for any respectable search engine.
Using a restful API, Elasticsearch saves data and indexes it automatically. It assigns types to fields and that way a search can be done smartly and quickly using filters and different queries.
It uses the JVM in order to be as fast as possible. It distributes indexes in "shards" of data and replicates shards across different nodes, so it is distributed and clusters can function even if not all nodes are operational. Adding nodes is super easy, and that's what makes it so scalable.
ES uses Lucene to solve searches. This is quite an advantage compared with, for example, Django query strings. A RESTful API call allows us to perform searches using JSON objects as parameters, making searches much more flexible and giving each search parameter within the object a different weight, importance, or priority.
The final result ranks objects that comply with the search query requirements. You could even use synonyms, autocompletes, spell suggestions, and correct typos. While the usual query strings provide results that follow certain logic rules, ES queries give you a ranked list of results that may fall under different criteria, and their order depends on how they comply with a certain rule or filter.
ES can also provide answers for data analysis, such as averages, unique term counts, and other statistics. This can be done using aggregations. To dig a little deeper into this feature, check the documentation here.
The main point is scalability and getting results and insights very fast. In most cases using Lucene could be enough to have all you need.
It seems sometimes that these tools are designed for projects with tons of data and are distributed in order to handle tons of users. Startups dream of growing to that scenario, but may start thinking small first to build a prototype and then when the data is there, start thinking about scaling problems.
Does it make sense, and does it pay off, to be prepared to grow A LOT? Why not? Elasticsearch has few drawbacks and is easy to use, so adopting it is simply a decision to be prepared for the future.
I’m going to give you a quick example of a dead simple project using Elasticsearch to quickly and beautifully search for some example data. It will be quick to do, Python powered and ready to scale in case we need it to, so, best of both worlds.
For the following part it would be nice to be familiarized with concepts like Cluster, Node, Document, Index. Take a look at the official guide if you have doubts.
First things first, get ES from here.
I followed this video tutorial to get things started in just a minute. I recommend all you to check it out later.
Once you downloaded ES, it’s as simple as running bin/elasticsearch and you will have your ES cluster with one node running! You can interact with it at http://localhost:9200/
If you hit it you will get something like this:
```json
{
  ...
}
```
Creating another node is as simple as:
```shell
bin/elasticsearch -Des.node.name=Node-2
```
It automatically detects the old node as its master and joins our cluster. By default we will be able to communicate with this new node using the 9201 port http://localhost:9201. Now we can talk with each node and receive the same data, they are supposed to be identical.
To use ES with our all-time favorite language, Python, it helps to install the elasticsearch-py package:

```shell
pip install elasticsearch
```
Now we will be able to use this package to index and search data using Python.
So, I wanted to make this project a "real world example", I really did, but after I found out there is a Star Wars API (http://swapi.co/), I couldn't resist, and it ended up being a fictional "galaxy far, far away" example. The API is dead simple to use, so we will get some data from there.
I’m using an IPython Notebook to do this test, I started with the sample request to make sure we can hit the ES server.
1 | import requests |
Then we connect to our ES server using Python and the elasticsearch-py library:
```python
# connect to our cluster
from elasticsearch import Elasticsearch
# ...
```
I added some data to test, and then deleted it. I’m skipping that part for this guide, but you can check it out in the notebook.
Now, using The Force, we connect to the Star Wars API and index some fictional people.
```python
# let's iterate over swapi people documents and index them
import json
# ...
```
Please notice that we automatically created an index "sw" and a doc_type "people" with the indexing command. We get 17 responses from swapi and index them with ES. I'm sure there are many more "people" in the swapi DB, but it seems we are getting a 404 for http://swapi.co/api/people/17. Bug report here! :-)
Anyway, to see if everything worked with these few results, we try to get the document with id=5.
```python
es.get(index='sw', doc_type='people', id=5)
```
We will get Princess Leia:
```python
{u'_id': u'5',
 # ...
```
Now, let’s add more data, this time using node 2! And let’s start at the 18th person, where we stopped.
```python
r = requests.get('http://localhost:9201')
# ...
```
We got the rest of the characters just fine.
Where is Darth Vader? Here is our search query:
```python
es.search(index="sw", body={"query": {"match": {"name": "Darth Vader"}}})
```
This will give us both Darth Vader AND Darth Maul, with ids 4 and 44 (notice that they are in the same index, even though we used a different node's client to call the index command). Both results have a score; Darth Vader's is much higher than Darth Maul's (2.77 vs 0.60) since Vader is an exact match. Take that, Darth Maul!
```python
{u'_shards': {u'failed': 0, u'successful': 5, u'total': 5},
 # ...
```
So, this query will give us results if the word is contained exactly in our indexed data. What if we want to build some kind of autocomplete input where we get the names that contain the characters we are typing?
There are many ways to do that, and a great number of other queries. Take a look here to learn more. I picked this one to get all documents with the prefix "lu" in their name field:

```python
es.search(index="sw", body={"query": {"prefix": {"name": "lu"}}})
```
We will get Luke Skywalker and Luminara Unduli, both with the same 1.0 score, since they match on the same two initial characters.
```python
{u'_shards': {u'failed': 0, u'successful': 5, u'total': 5},
 # ...
```
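Since these search bodies are just JSON-serializable dictionaries, it is easy to build them programmatically. A small sketch (the helper names are mine, not part of elasticsearch-py):

```python
import json

def match_query(field, text):
    """Full-text match query body, as used for the Darth Vader search."""
    return {"query": {"match": {field: text}}}

def prefix_query(field, prefix):
    """Prefix query body, as used for the autocomplete-style 'lu' search."""
    return {"query": {"prefix": {field: prefix}}}

body = prefix_query("name", "lu")
# es.search(index="sw", body=body)   # would run against the cluster above
print(json.dumps(body))
```

Wrapping query construction like this keeps the calling code readable once queries grow weights, filters, and aggregations.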
There are many other interesting queries we can do. If, for example, we want to get all elements that are similar in some way, for a related or corrected search, we can use something like this:
```python
es.search(index="sw", body={"query":
    {"fuzzy_like_this_field": {"name":
        {"like_text": "jaba", "max_query_terms": 5}}}})
```
And we get Jabba even though we had a typo in our search query. That is powerful!
```python
{u'_shards': {u'failed': 0, u'successful': 5, u'total': 5},
 # ...
```
This was just a simple overview on how to set up your Elasticsearch server and start working with some data using Python. The code used here is publicly available in this IPython notebook.
We encourage you to learn more about ES and especially take a look at the Elastic stack, where you will be able to see beautiful analytics and insights with Kibana and go through logs using Logstash.
In following posts we will talk about more advanced ES features and we will try to extend this simple test and use it to show a more interesting Django app powered by this data and by ES.
Hope this post was useful for developers trying to enter the ES world.
At Tryolabs we’re Elastic official partners. If you want to talk about Elasticsearch, ELK, applications and possible projects using these technologies, drop us a line to hello@tryolabs.com (or fill out this form) and we will be glad to connect!
There's not much time left for the Chinese team. Function declarations are first-class citizens! During the `compile phase`, function declarations are processed first, and every `var` variable is created with the default value `undefined`, all to speed up execution. In short: when the JavaScript engine parses a script, it processes all declared variables and functions during pre-compilation, pre-declaring the variables first and then pre-defining the functions!
```javascript
var v = 'Hello World';
// ...
```
It reports "undefined".

Hoisting a function declaration [succeeds]:
```javascript
function myTest(){
    // ...
}
```
Hoisting a function expression [fails]:
```javascript
function myTest(){
    // ...
}
```
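The cases above can be condensed into one runnable snippet showing what the engine has hoisted at each point:

```javascript
// The compile phase hoists `var` declarations (initialized to undefined)
// and whole function declarations, but NOT function-expression bodies.
const results = [];

results.push(typeof v);    // "undefined": the declaration hoisted, the value did not
var v = "Hello World";

results.push(declared());  // works: function declarations hoist in full
function declared() { return "ok"; }

results.push(typeof expr); // "undefined": only `var expr` was hoisted, not the function
var expr = function () { return "late"; };
```

Calling `expr()` before the assignment line would throw a TypeError, which is exactly the "function expression hoisting fails" case.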
It's all because we're too poor and don't know the right people.

The way we usually write code to get the client's IP:
```php
function GetIP(){
    // ...
}
```
This is actually flawed: the IP can easily be spoofed through request headers:
```php
$curl = curl_init(); // initialize a curl handle
// ...
```
Solution: when determining the client IP, give priority to `$_SERVER["REMOTE_ADDR"]`.
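That advice can be sketched as follows (in Python for brevity, with a dict standing in for PHP's `$_SERVER`; the function name is mine):

```python
def get_client_ip(server):
    """Pick the client IP, trusting the transport layer over headers.

    REMOTE_ADDR comes from the TCP connection itself and cannot be set
    by a request header, so it is checked first. X-Forwarded-For is a
    client-supplied header and is only a fallback hint.
    """
    if server.get("REMOTE_ADDR"):
        return server["REMOTE_ADDR"]
    # Forgeable fallback; only meaningful behind a proxy chain you trust.
    forwarded = server.get("HTTP_X_FORWARDED_FOR", "")
    return forwarded.split(",")[0].strip() or None
```

With this ordering, the curl trick above (sending a fake X-Forwarded-For) no longer changes the IP your application records.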
I'm just a beginner; please go easy on me.
```python
import threading
# ...
```
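Only the import survives from the original snippet. A typical beginner threading demo, and my best guess at the spirit of it, is several threads incrementing a shared counter under a lock:

```python
import threading

counter = 0
lock = threading.Lock()

def worker(n):
    """Increment the shared counter n times, one locked step at a time."""
    global counter
    for _ in range(n):
        with lock:          # without the lock, read-modify-write could interleave
            counter += 1

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()               # wait for all workers before reading the result

print(counter)  # 4000
```

Dropping the `with lock:` line makes the final count nondeterministic, which is the classic motivation for locks.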
Interest is the best teacher; shame comes a close second.
In short, the HTTP Referer is part of the request headers. When a browser sends a request to a web server, it generally includes the Referer to tell the server which page the link came from, and the server can use this for processing. For example, if my home page links to a friend's site, his server can use the HTTP Referer to count how many visitors click through from my page each day.
When you follow a link from an https page to an unencrypted http page, no HTTP Referer can be detected on the http page. For instance, clicking the W3C XHTML validation icon at the bottom of my https pages (URL http://validator.w3.org/check?uri=referer) never completes the check; it reports:
No Referer header found!
It turns out this is defined in the HTTP RFC:
> 15.1.3 Encoding Sensitive Information in URI's
>
> …
>
> Clients SHOULD NOT include a Referer header field in a (non-secure)
> HTTP request if the referring page was transferred with a secure
> protocol.
This is for security reasons: when the referring page is encrypted and the destination is not, the client does not send the Referer. IE has always behaved this way, and Firefox is no exception. It does not affect navigation from one encrypted page to another.
Again, when following a link from an https page to an unencrypted http page, the http page cannot detect the HTTP Referer; but that is only the default behavior when the https side has configured nothing, whereas sites such as Facebook,