1. The project will be implemented on GCP. Set up our instance first.
- Create a new project.
- Set up a new firewall rule that allows requests from all addresses: set the source IP range to `0.0.0.0/0` and the protocols and ports to `tcp:9200,8080`.
- Create a new GCE instance and select Ubuntu 18.04 as the image. Allow HTTP and HTTPS traffic, and don't forget to set the networking options so the instance carries the firewall-rule tag we've just created.
- Start the Compute Engine instance and connect via SSH (open in browser window).
- Update the system and do all necessary configuration (install software-properties-common, golang, vim, git, ...).
2. Implement the initial Golang backend handler.
2.1 Create a main.go file and define the structures for Post and Location.
- Import the necessary packages: `"encoding/json"` for now.
- This backend handler needs to handle 1) Post requests from the React frontend and 2) location-based search requests from the frontend. Therefore, we define structures for both here. Location:

```go
type Location struct {
	Lat float64 `json:"lat"`
	Lon float64 `json:"lon"`
}
```
- For Post, according to the requests received from the frontend, there are six fields we need to handle here: User (the username), Message (the message within this post), Location (the geo-location where the post was sent), Url (the media file URL), Type (whether it's an image post or a video post), and Face (the confidence score for how likely this image contains a face, provided by the ML script we'll add later on).
```go
type Post struct {
	User     string   `json:"user"`
	Message  string   `json:"message"`
	Location Location `json:"location"`
	Url      string   `json:"url"`
	Type     string   `json:"type"`
	Face     float64  `json:"face"`
}
```
2.2 Add the handlerPost() method to handle Post.
Within this function, we have two parameters: w (the response writer) and r (the request from the frontend).
Parse the request body as a JSON object, decode it into a new Post, and print it.

```go
func handlerPost(w http.ResponseWriter, r *http.Request) {
	fmt.Println("Received one post request")
	decoder := json.NewDecoder(r.Body)
	var p Post
	if err := decoder.Decode(&p); err != nil {
		http.Error(w, "Cannot decode post data from client", http.StatusBadRequest)
		fmt.Printf("Cannot decode post data from client %v.\n", err)
		return
	}
	fmt.Fprintf(w, "Post received: %s\n", p.Message)
}
```

Then call this handler method in func main():
```go
func main() {
	fmt.Println("started-service")
	http.HandleFunc("/post", handlerPost)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```
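As a quick sanity check (a minimal sketch that is not part of the original walkthrough, assuming a main_test.go file in the same package), the handler can be exercised with the standard net/http/httptest package before any storage is wired in. Note this targets the JSON-decoding version of handlerPost from this section; later sections switch it to multipart form data.

```go
package main

import (
	"net/http"
	"net/http/httptest"
	"strings"
	"testing"
)

// TestHandlerPost sends a fake JSON post through handlerPost and checks the status code.
func TestHandlerPost(t *testing.T) {
	body := strings.NewReader(`{"user":"jack","message":"hello","location":{"lat":37.0,"lon":-122.0}}`)
	req := httptest.NewRequest("POST", "/post", body)
	rec := httptest.NewRecorder()

	handlerPost(rec, req)

	if rec.Code != http.StatusOK {
		t.Fatalf("expected 200 OK, got %d", rec.Code)
	}
}
```

Run it with go test.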
2.3 Add the handlerSearch() method to handle search requests from clients and return a JSON object.
We have the same two parameters (w and r) as in handlerPost(), but this time the latitude and longitude are sent in the request URL rather than the body. So we read them with URL.Query().Get() and convert them to float64 with Go's strconv package. We also set a range within which we'd like to see others' posts, 200km by default.
```go
import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
	"strconv"
)

const (
	DISTANCE = "200km"
)

func handlerSearch(w http.ResponseWriter, r *http.Request) {
	fmt.Println("Received one request for search")
	lat, _ := strconv.ParseFloat(r.URL.Query().Get("lat"), 64)
	lon, _ := strconv.ParseFloat(r.URL.Query().Get("lon"), 64)
	ran := DISTANCE
	if val := r.URL.Query().Get("range"); val != "" {
		ran = val + "km"
	}
	w.Header().Set("Content-Type", "application/json")
	// For now, just echo the parsed parameters; the actual Elasticsearch query is added in section 3.2.
	fmt.Fprintf(w, "Search received: %f %f %s\n", lat, lon, ran)
}
```

Call handlerSearch in main():

```go
http.HandleFunc("/search", handlerSearch)
```
3. Set up Elasticsearch as the NoSQL data storage.
3.1 Save posts to Elasticsearch based on a geo-index.
Elasticsearch: Elasticsearch is an open-source, distributed, RESTful search engine. It provides geo-based queries. In this project, Elasticsearch is used as the database that stores all Post data.
- Install Java and Elasticsearch on the GCE instance.
- Configure the elasticsearch.yml file: set network.host to 0.0.0.0 and http.port to 9200.
- Enable and start Elasticsearch: sudo systemctl enable elasticsearch, then sudo systemctl start elasticsearch.
- Check the connection by typing http://{your_ip_address}:9200 into your browser, where the IP address is the GCE instance's external IP.
- Connect the Go program to Elasticsearch: install the Go elastic library with go get github.com/olivere/elastic and import it. Add the Elasticsearch URL to the constants:
```go
import (
	"context"
	"encoding/json"
	"fmt"
	"log"
	"net/http"
	"strconv"

	"github.com/olivere/elastic"
	"github.com/pborman/uuid"
)

const (
	POST_INDEX = "post"
	POST_TYPE  = "post"
	DISTANCE   = "200km"
	ES_URL     = "http://YOUR_ES_IP_ADDRESS:9200"
)
```

For each post, we first need to create a client if one does not exist, and then create the geo-index if it does not exist. Here we add a new method createIndexIfNotExist() to handle this.
```go
func main() {
	...
	createIndexIfNotExist()
}

func createIndexIfNotExist() {
	// Create a new client
	client, err := elastic.NewClient(elastic.SetURL(ES_URL), elastic.SetSniff(false))
	if err != nil {
		panic(err)
	}

	// Check whether the index exists
	exists, err := client.IndexExists(POST_INDEX).Do(context.Background())
	if err != nil {
		panic(err)
	}

	// If not, create one with a geo_point mapping on the location field
	if !exists {
		mapping := `{
			"mappings": {
				"post": {
					"properties": {
						"location": {
							"type": "geo_point"
						}
					}
				}
			}
		}`
		_, err = client.CreateIndex(POST_INDEX).Body(mapping).Do(context.Background())
		if err != nil {
			panic(err)
		}
	}
}
```
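Optionally, the same client can verify connectivity from Go rather than only through the browser check above. This is a minimal sketch that is not part of the original walkthrough; it assumes the ES_URL constant defined earlier, and Ping() is part of the olivere/elastic client.

```go
// checkESConnection is a hypothetical helper that pings the Elasticsearch cluster
// and prints its version, mirroring the browser check against port 9200.
func checkESConnection() {
	client, err := elastic.NewClient(elastic.SetURL(ES_URL), elastic.SetSniff(false))
	if err != nil {
		panic(err)
	}
	// Ping returns basic cluster info plus the HTTP status code of the response.
	info, code, err := client.Ping(ES_URL).Do(context.Background())
	if err != nil {
		panic(err)
	}
	fmt.Printf("Elasticsearch returned status %d and version %s\n", code, info.Version.Number)
}
```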
3.2 Get posts from Elasticsearch based on a geo-distance query.
After creating the geo-index, we need to POST our Posts to Elasticsearch. Here we add a new method saveToES() to handle this and call it in handlerPost().

```go
func saveToES(post *Post, id string) error {
	// Create a new connection to ES
	client, err := elastic.NewClient(elastic.SetURL(ES_URL), elastic.SetSniff(false))
	if err != nil {
		return err
	}

	_, err = client.Index().
		Index(POST_INDEX).
		Type(POST_TYPE).
		Id(id).
		BodyJson(post).
		Refresh("wait_for").
		Do(context.Background())
	if err != nil {
		return err
	}

	fmt.Printf("Post is saved to index: %s\n", post.Message)
	return nil
}
```
```go
func handlerPost(w http.ResponseWriter, r *http.Request) {
	...
	// Set response headers. Allow cross-domain visits.
	w.Header().Set("Content-Type", "application/json")
	w.Header().Set("Access-Control-Allow-Origin", "*")
	w.Header().Set("Access-Control-Allow-Headers", "Content-Type,Authorization")

	decoder := json.NewDecoder(r.Body)
	var p Post
	if err := decoder.Decode(&p); err != nil {
		panic(err)
	}
	fmt.Fprintf(w, "Post received: %v %s %v %v\n", p.User, p.Message, p.Location.Lat, p.Location.Lon)

	// Create a new id
	id := uuid.New()
	// Call saveToES() to POST the post to ES
	err := saveToES(&p, id)
	if err != nil {
		http.Error(w, "Failed to save post to ElasticSearch", http.StatusInternalServerError)
		fmt.Printf("Failed to save post to ElasticSearch %v.\n", err)
		return
	}
	fmt.Printf("Saved one post to ElasticSearch: %s", p.Message)
}
```

We also need to add a method readFromES() to handle reading data from ES, and update handlerSearch() accordingly (the code is explained inline).
```go
import (
	...
	"reflect"
)

func readFromES(lat, lon float64, ran string) ([]Post, error) {
	// Create a new connection to ES. Return if there is any error.
	client, err := elastic.NewClient(elastic.SetURL(ES_URL), elastic.SetSniff(false))
	if err != nil {
		return nil, err
	}

	// Prepare a geo-distance query that finds all Posts within the given range of (lat, lon).
	query := elastic.NewGeoDistanceQuery("location")
	query = query.Distance(ran).Lat(lat).Lon(lon)

	// Get the results based on both POST_INDEX and query.
	searchResult, err := client.Search().
		Index(POST_INDEX).
		Query(query).
		Pretty(true).
		Do(context.Background())
	if err != nil {
		return nil, err
	}
	fmt.Printf("Query took %d milliseconds\n", searchResult.TookInMillis)

	// Iterate through the search results; if an item is of type Post, cast it and append it to the posts slice.
	var ptyp Post
	var posts []Post
	for _, item := range searchResult.Each(reflect.TypeOf(ptyp)) {
		if p, ok := item.(Post); ok {
			posts = append(posts, p)
		}
	}
	// Return all Posts
	return posts, nil
}
```
```go
func handlerSearch(w http.ResponseWriter, r *http.Request) {
	...
	// Set response headers. Allow cross-domain visits.
	w.Header().Set("Content-Type", "application/json")
	w.Header().Set("Access-Control-Allow-Origin", "*")
	w.Header().Set("Access-Control-Allow-Headers", "Content-Type,Authorization")
	if r.Method == "OPTIONS" {
		return
	}
	...
	// Call readFromES to get all posts within range
	posts, err := readFromES(lat, lon, ran)
	if err != nil {
		http.Error(w, "Failed to read post from ElasticSearch", http.StatusInternalServerError)
		fmt.Printf("Failed to read post from ElasticSearch %v.\n", err)
		return
	}

	// Marshal the posts into JSON format
	js, err := json.Marshal(posts)
	if err != nil {
		http.Error(w, "Failed to parse posts into JSON format", http.StatusInternalServerError)
		fmt.Printf("Failed to parse posts into JSON format %v.\n", err)
		return
	}
	w.Write(js)
}
```
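For reference, a client can now hit the /search endpoint with lat, lon, and range as URL parameters. The following is a minimal client-side sketch, not part of the original code: it assumes the service runs locally on port 8080 and that the endpoint is still unauthenticated (JWT protection is added in section 6).

```go
// searchNearby is a hypothetical client helper that queries /search and decodes the JSON response.
func searchNearby(lat, lon float64, rangeKm int) ([]Post, error) {
	url := fmt.Sprintf("http://localhost:8080/search?lat=%f&lon=%f&range=%d", lat, lon, rangeKm)
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	var posts []Post
	if err := json.NewDecoder(resp.Body).Decode(&posts); err != nil {
		return nil, err
	}
	return posts, nil
}
```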
4. Set up GCS as the media file storage.
4.1 What is GCS and why use it here.
- GCS is a powerful, simple, and cost-effective object storage service. Unlike a database, an object storage system stores each piece of data as an object with no tree-shaped index. Data is accessed through a unique address/path, which ensures higher efficiency.
- In this project, we need to store a lot of media files posted by users. A database is not suitable for storing binary (blob) media files, for two reasons: 1) storing media files in a database is much slower than in a file system, and 2) media files greatly increase the size of the database, making further maintenance much harder.
- GCS, however, behaves like a file system and has very good availability, scalability, and durability. It also provides CDN (Content Delivery Network) service to serve files from edge servers and reduce loading latency.
- For the steps to set up GCS, please refer to the other post "Storing media files on Object Storage Services".
- Import the relevant libraries and add the bucket name as a constant. Since we need a unique URL for retrieving the posted media files, a new field needs to be added to the Post structure.
```go
import (
	...
	"io"

	"cloud.google.com/go/storage"
)

const (
	...
	BUCKET_NAME = "YOUR_BUCKET_NAME" // the name of the GCS bucket you just created
)

type Post struct {
	...
	Url string `json:"url"`
}
```
4.2 Save media files to GCS.
Next we define a new method to handle saving images to GCS. Google Cloud Storage provides a good example written in Golang; just follow this Go example.
```go
func saveToGCS(r io.Reader, bucketName, objectName string) (*storage.ObjectAttrs, error) {
	ctx := context.Background()

	// Create a client.
	client, err := storage.NewClient(ctx)
	if err != nil {
		return nil, err
	}

	// Make sure the bucket exists.
	bucket := client.Bucket(bucketName)
	if _, err := bucket.Attrs(ctx); err != nil {
		return nil, err
	}

	// Write the file content to the object.
	object := bucket.Object(objectName)
	wc := object.NewWriter(ctx)
	if _, err = io.Copy(wc, r); err != nil {
		return nil, err
	}
	if err := wc.Close(); err != nil {
		return nil, err
	}

	// Make the object publicly readable.
	if err = object.ACL().Set(ctx, storage.AllUsers, storage.RoleReader); err != nil {
		return nil, err
	}

	attrs, err := object.Attrs(ctx)
	if err != nil {
		return nil, err
	}
	fmt.Printf("Image is saved to GCS: %s\n", attrs.MediaLink)
	return attrs, nil
}
```

The handlerPost() method also needs to be updated to parse the multipart form request, save the post data to ES, and save the media file to GCS.
```go
func handlerPost(w http.ResponseWriter, r *http.Request) {
	...
	p := &Post{
		User:    r.FormValue("user"),
		Message: r.FormValue("message"),
		Location: Location{
			Lat: lat,
			Lon: lon,
		},
	}

	id := uuid.New()

	file, _, err := r.FormFile("image")
	if err != nil {
		http.Error(w, "Image is not available", http.StatusBadRequest)
		fmt.Printf("Image is not available %v.\n", err)
		return
	}

	attrs, err := saveToGCS(file, BUCKET_NAME, id)
	if err != nil {
		http.Error(w, "Failed to save image to GCS", http.StatusInternalServerError)
		fmt.Printf("Failed to save image to GCS %v.\n", err)
		return
	}
	p.Url = attrs.MediaLink

	err = saveToES(p, id)
	if err != nil {
		http.Error(w, "Failed to save post to ElasticSearch", http.StatusInternalServerError)
		fmt.Printf("Failed to save post to ElasticSearch %v.\n", err)
		return
	}
	fmt.Printf("Saved one post to ElasticSearch: %s", p.Message)
}
```
5. Set up BigTable, BigQuery, and DataFlow.
5.1 BigTable
- Bigtable is a sparse, distributed, persistent, multidimensional sorted map (i.e., a key-value pair storage). It won't lose data.
- Bigtable is indexed by ROW_KEY, COLUMN_KEY, and TIMESTAMP.
- Comparing Bigtable vs. ES vs. Datastore: 1) ES may lose data, but BigTable won't. 2) ES is more like a search engine with rich query support; it's good at reading and searching data. 3) BigQuery is used for offline analysis. 4) BT = NoSQL + Cloud. 5) Mongo = NoSQL. 6) ES = NoSQL + query optimization. 7) BQ = MySQL-like + Cloud. 8) DF = MapReduce + Cloud.
5.2 Set up BigTable in Google Cloud.
- Enable BigTable API.
- Create a BigTable instance.
- Install cbt in the cloud terminal with sudo gcloud components install cbt and set up its configs: echo project = YOUR_PROJECT_ID > ~/.cbtrc, then echo instance = <YOUR_INSTANCE_NAME> >> ~/.cbtrc. Set up the table and column families: cbt createtable post, then cbt createfamily post post and cbt createfamily post location.
- Write data into BT: go get cloud.google.com/go/bigtable.
- Follow the Go BigTable example, create a new saveToBigTable() method, and update handlerPost(), as shown below:
```go
// Update imports
import (
	...
	"context"

	"cloud.google.com/go/bigtable"
)

// Save a post to BigTable.
// YOUR_BIGTABLE_PROJECT_ID and YOUR_BT_INSTANCE are string constants holding your
// GCP project ID and BigTable instance name.
func saveToBigTable(p *Post, id string) error {
	ctx := context.Background()
	bt_client, err := bigtable.NewClient(ctx, YOUR_BIGTABLE_PROJECT_ID, YOUR_BT_INSTANCE)
	if err != nil {
		return err
	}

	tbl := bt_client.Open("post")
	mut := bigtable.NewMutation()
	t := bigtable.Now()

	mut.Set("post", "user", t, []byte(p.User))
	mut.Set("post", "message", t, []byte(p.Message))
	mut.Set("location", "lat", t, []byte(strconv.FormatFloat(p.Location.Lat, 'f', -1, 64)))
	mut.Set("location", "lon", t, []byte(strconv.FormatFloat(p.Location.Lon, 'f', -1, 64)))

	err = tbl.Apply(ctx, id, mut)
	if err != nil {
		return err
	}
	fmt.Printf("Post is saved to BigTable: %s\n", p.Message)
	return nil
}

func handlerPost(w http.ResponseWriter, r *http.Request) {
	...
	fmt.Printf("Post is saved to index: %s\n", p.Message)

	err = saveToBigTable(p, id)
	if err != nil {
		http.Error(w, "Failed to save post to BigTable", http.StatusInternalServerError)
		fmt.Printf("Failed to save post to BigTable %v.\n", err)
		return
	}
}
```
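To make the ROW_KEY / COLUMN_KEY / TIMESTAMP indexing described in 5.1 concrete, here is a minimal read-side sketch. It is not part of the original walkthrough; it assumes the same placeholder constants and the post table created above, and ReadRow/ReadItem come from the cloud.google.com/go/bigtable client.

```go
// readFromBigTable is a hypothetical helper that reads one post row back by its row key (the post id).
func readFromBigTable(id string) error {
	ctx := context.Background()
	btClient, err := bigtable.NewClient(ctx, YOUR_BIGTABLE_PROJECT_ID, YOUR_BT_INSTANCE)
	if err != nil {
		return err
	}

	tbl := btClient.Open("post")
	// A Row maps each column family to its cells; every cell carries the
	// fully-qualified column name, the timestamp, and the value.
	row, err := tbl.ReadRow(ctx, id)
	if err != nil {
		return err
	}
	for family, items := range row {
		for _, item := range items {
			fmt.Printf("family=%s column=%s ts=%v value=%s\n", family, item.Column, item.Timestamp, string(item.Value))
		}
	}
	return nil
}
```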
5.3 Google Cloud DataFlow.
- Google Cloud Dataflow is a unified programming model and a managed service for developing and executing a wide range of data processing patterns including ETL (Extract, Transform and Load), batch computation, and continuous computation.
- In this project, DataFlow is used to dump data from BigTable to BigQuery so that data analysts can analyze it.
5.4 Google Cloud BigQuery.
- BigQuery is a cloud-based data warehouse queried with SQL; its syntax is very similar to MySQL.
5.5 Set up BQ and DF on GCP.
- Create a new Dataset for BigQuery under option “BIG DATA”.
- Create a new Bucket under Storage -> Browser.
- In the cloud terminal, enter:

```
mvn archetype:generate -DarchetypeArtifactId=google-cloud-dataflow-java-archetypes-examples -DarchetypeGroupId=com.google.cloud.dataflow -DarchetypeVersion=2.0.0 -DgroupId=com.around -DartifactId=dataflow -Dversion="0.1" -DinteractiveMode=false -Dpackage=com.around
```

- vim into the pom.xml file and add two dependencies:

```xml
<dependency>
  <groupId>com.google.cloud.dataflow</groupId>
  <artifactId>google-cloud-dataflow-java-sdk-all</artifactId>
  <version>1.9.1</version>
</dependency>
<dependency>
  <groupId>com.google.cloud.bigtable</groupId>
  <artifactId>bigtable-hbase-dataflow</artifactId>
  <version>0.9.5.1</version>
</dependency>
```
6. Authentication
6.1 Add HTTP method restriction
- To have different handlers for different HTTP methods, we install the third-party library gorilla/mux: go get github.com/gorilla/mux.
- Import it in our Go file and update the main() function.
- Update handlerPost() and handlerSearch() to exit early for the OPTIONS method.
```go
import (
	...
	"github.com/gorilla/mux"
)

func main() {
	fmt.Println("started-service")
	createIndexIfNotExist()

	r := mux.NewRouter()
	// Restrict each endpoint to POST or GET (plus OPTIONS for CORS preflight).
	r.Handle("/post", http.HandlerFunc(handlerPost)).Methods("POST", "OPTIONS")
	r.Handle("/search", http.HandlerFunc(handlerSearch)).Methods("GET", "OPTIONS")

	http.Handle("/", r)
	log.Fatal(http.ListenAndServe(":8080", nil))
}

func handlerPost(w http.ResponseWriter, r *http.Request) {
	fmt.Println("Received one post request")
	w.Header().Set("Content-Type", "application/json")
	w.Header().Set("Access-Control-Allow-Origin", "*")
	w.Header().Set("Access-Control-Allow-Headers", "Content-Type,Authorization")
	if r.Method == "OPTIONS" {
		return
	}

	lat, _ := strconv.ParseFloat(r.FormValue("lat"), 64)
	lon, _ := strconv.ParseFloat(r.FormValue("lon"), 64)
	...
}

func handlerSearch(w http.ResponseWriter, r *http.Request) {
	fmt.Println("Received one request for search")
	w.Header().Set("Content-Type", "application/json")
	w.Header().Set("Access-Control-Allow-Origin", "*")
	w.Header().Set("Access-Control-Allow-Headers", "Content-Type,Authorization")
	if r.Method == "OPTIONS" {
		return
	}

	lat, _ := strconv.ParseFloat(r.URL.Query().Get("lat"), 64)
	lon, _ := strconv.ParseFloat(r.URL.Query().Get("lon"), 64)
	...
}
```
6.2 Implement authentication with JWT
Create a new Go file dedicated to user registration and user login. Install the JWT-Go library with go get github.com/dgrijalva/jwt-go, and import all necessary packages.
```go
package main

import (
	"context"
	"encoding/json"
	"errors"
	"fmt"
	"net/http"
	"reflect"
	"regexp"
	"time"

	jwt "github.com/dgrijalva/jwt-go"
	"github.com/olivere/elastic"
)
```

Add the necessary constants and the User struct.
```go
const (
	USER_INDEX = "user"
	USER_TYPE  = "user"
)

type User struct {
	Username string `json:"username"`
	Password string `json:"password"`
	Age      int64  `json:"age"`
	Gender   string `json:"gender"`
}

var mySigningKey = []byte("secret")
```

Within the User.go file, after defining the User struct, we need to save/check user information to/from Elasticsearch (where we also store all post data), and handle the signup/login functions that read user input from the request.
```go
func addUser(user User) error {
	client, err := elastic.NewClient(elastic.SetURL(ES_URL), elastic.SetSniff(false))
	if err != nil {
		return err
	}

	// select * from users where username = ...
	query := elastic.NewTermQuery("username", user.Username)
	searchResult, err := client.Search().
		Index(USER_INDEX).
		Query(query).
		Pretty(true).
		Do(context.Background())
	if err != nil {
		return err
	}
	if searchResult.TotalHits() > 0 {
		return errors.New("User already exists")
	}

	_, err = client.Index().
		Index(USER_INDEX).
		Type(USER_TYPE).
		Id(user.Username).
		BodyJson(user).
		Refresh("wait_for").
		Do(context.Background())
	if err != nil {
		return err
	}

	fmt.Printf("User is added: %s\n", user.Username)
	return nil
}

func checkUser(username, password string) error {
	client, err := elastic.NewClient(elastic.SetURL(ES_URL), elastic.SetSniff(false))
	if err != nil {
		return err
	}

	query := elastic.NewTermQuery("username", username)
	searchResult, err := client.Search().
		Index(USER_INDEX).
		Query(query).
		Pretty(true).
		Do(context.Background())
	if err != nil {
		return err
	}

	var utyp User
	for _, item := range searchResult.Each(reflect.TypeOf(utyp)) {
		if u, ok := item.(User); ok {
			if username == u.Username && password == u.Password {
				fmt.Printf("Login as %s\n", username)
				return nil
			}
		}
	}
	return errors.New("Wrong username or password")
}

func handlerLogin(w http.ResponseWriter, r *http.Request) {
	fmt.Println("Received one login request")
	w.Header().Set("Content-Type", "text/plain")
	w.Header().Set("Access-Control-Allow-Origin", "*")
	if r.Method == "OPTIONS" {
		return
	}

	decoder := json.NewDecoder(r.Body)
	var user User
	if err := decoder.Decode(&user); err != nil {
		http.Error(w, "Cannot decode user data from client", http.StatusBadRequest)
		fmt.Printf("Cannot decode user data from client %v.\n", err)
		return
	}

	if err := checkUser(user.Username, user.Password); err != nil {
		if err.Error() == "Wrong username or password" {
			http.Error(w, "Wrong username or password", http.StatusUnauthorized)
		} else {
			http.Error(w, "Failed to read from ElasticSearch", http.StatusInternalServerError)
		}
		return
	}

	token := jwt.NewWithClaims(jwt.SigningMethodHS256, jwt.MapClaims{
		"username": user.Username,
		"exp":      time.Now().Add(time.Hour * 24).Unix(),
	})
	tokenString, err := token.SignedString(mySigningKey)
	if err != nil {
		http.Error(w, "Failed to generate token", http.StatusInternalServerError)
		fmt.Printf("Failed to generate token %v.\n", err)
		return
	}

	w.Write([]byte(tokenString))
}

func handlerSignup(w http.ResponseWriter, r *http.Request) {
	fmt.Println("Received one signup request")
	w.Header().Set("Content-Type", "text/plain")
	w.Header().Set("Access-Control-Allow-Origin", "*")
	if r.Method == "OPTIONS" {
		return
	}

	decoder := json.NewDecoder(r.Body)
	var user User
	if err := decoder.Decode(&user); err != nil {
		http.Error(w, "Cannot decode user data from client", http.StatusBadRequest)
		fmt.Printf("Cannot decode user data from client %v.\n", err)
		return
	}

	if user.Username == "" || user.Password == "" || !regexp.MustCompile(`^[a-z0-9_]+$`).MatchString(user.Username) {
		http.Error(w, "Invalid username or password", http.StatusBadRequest)
		fmt.Printf("Invalid username or password.\n")
		return
	}

	if err := addUser(user); err != nil {
		if err.Error() == "User already exists" {
			http.Error(w, "User already exists", http.StatusBadRequest)
		} else {
			http.Error(w, "Failed to save to ElasticSearch", http.StatusInternalServerError)
		}
		return
	}

	w.Write([]byte("User added successfully."))
}
```
6.3 Add JWT to protect Post/Search endpoints.
Install the JWT middleware package with go get github.com/auth0/go-jwt-middleware and update the imports. Besides the geo-index, ES also needs an index for users so that later authentication lookups work, so we update createIndexIfNotExist() as well.
```go
func createIndexIfNotExist() {
	...
	exists, err = client.IndexExists(USER_INDEX).Do(context.Background())
	if err != nil {
		panic(err)
	}
	if !exists {
		_, err = client.CreateIndex(USER_INDEX).Do(context.Background())
		if err != nil {
			panic(err)
		}
	}
}
```

Go back to main.go and, following the middleware's example, add the JWT middleware to the router handlers.
```go
func main() {
	fmt.Println("started-service")
	createIndexIfNotExist()

	jwtMiddleware := jwtmiddleware.New(jwtmiddleware.Options{
		ValidationKeyGetter: func(token *jwt.Token) (interface{}, error) {
			return []byte(mySigningKey), nil
		},
		SigningMethod: jwt.SigningMethodHS256,
	})

	r := mux.NewRouter()
	r.Handle("/post", jwtMiddleware.Handler(http.HandlerFunc(handlerPost))).Methods("POST", "OPTIONS")
	r.Handle("/search", jwtMiddleware.Handler(http.HandlerFunc(handlerSearch))).Methods("GET", "OPTIONS")
	r.Handle("/signup", http.HandlerFunc(handlerSignup)).Methods("POST", "OPTIONS")
	r.Handle("/login", http.HandlerFunc(handlerLogin)).Methods("POST", "OPTIONS")

	http.Handle("/", r)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

Update handlerPost() following this example:
```go
func handlerPost(w http.ResponseWriter, r *http.Request) {
	...
	w.Header().Set("Content-Type", "application/json")
	w.Header().Set("Access-Control-Allow-Origin", "*")
	w.Header().Set("Access-Control-Allow-Headers", "Content-Type,Authorization")

	// Read the username out of the JWT claims that the middleware stored in the request context.
	user := r.Context().Value("user")
	claims := user.(*jwt.Token).Claims
	username := claims.(jwt.MapClaims)["username"]

	lat, _ := strconv.ParseFloat(r.FormValue("lat"), 64)
	lon, _ := strconv.ParseFloat(r.FormValue("lon"), 64)

	p := &Post{
		User:    username.(string),
		Message: r.FormValue("message"),
		Location: Location{
			Lat: lat,
			Lon: lon,
		},
	}
	...
}
```
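To show how the protected endpoints are consumed, here is a minimal client-side sketch. It is not part of the original walkthrough: it assumes the server runs on localhost:8080, that the user has already signed up, that the bytes, io/ioutil, encoding/json, fmt, and net/http packages are imported, and it reuses the User struct defined above. The token returned by /login is sent back as a Bearer token, which is what go-jwt-middleware expects by default.

```go
// loginAndSearch is a hypothetical client helper: it logs in, then calls the
// JWT-protected /search endpoint with the returned token.
func loginAndSearch(username, password string) error {
	body, err := json.Marshal(User{Username: username, Password: password})
	if err != nil {
		return err
	}

	// 1) Log in and read the signed token from the response body.
	resp, err := http.Post("http://localhost:8080/login", "application/json", bytes.NewReader(body))
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	token, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		return err
	}

	// 2) Call the protected endpoint with "Authorization: Bearer <token>".
	req, err := http.NewRequest("GET", "http://localhost:8080/search?lat=37.0&lon=-122.0", nil)
	if err != nil {
		return err
	}
	req.Header.Set("Authorization", "Bearer "+string(token))

	searchResp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer searchResp.Body.Close()
	fmt.Println("search responded with status:", searchResp.Status)
	return nil
}
```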