
Writing a web service in Go (part two)

This continues the article on how to write a small full-featured application in Go.



In the first part, we implemented the REST API and learned how to collect incoming HTTP requests. In this part, we will cover our application with tests, add a beautiful web interface based on AngularJS and Bootstrap, and introduce access restriction for different users.





This part takes us through the following steps:

  1. Step Four. But what about the tests?
  2. Step Five. Decorations and a web interface.
  3. Step Six. Add a little privacy.
  4. Step Seven. Clearing out the unnecessary.
  5. Step Eight. Use Redis for storage.



Step Four. But what about the tests?



Any application should be covered with tests, no matter what size it is. Go has a large number of built-in tools for working with tests. You can write both ordinary unit tests and, for example, performance tests (benchmarks). The toolkit also lets you view the test coverage of your code.



The basic package for working with tests is testing . The two main types here are T for ordinary unit tests and B for benchmarks. Tests in Go are written in the same package as the main program, in files with the _test.go suffix. Therefore, any private data structures available inside the package are also available inside the tests (the tests also share a common package scope between them). When compiling the main program, test files are ignored.



In addition to the basic testing package, there is a large number of third-party libraries that simplify writing tests or allow writing in a particular style (even BDD ). For example, here’s a good introductory article on writing in a TDD style in Go.



On GitHub, there is a benchmark comparing test libraries, among which are heavyweights such as goconvey , which even provides a web-based interface and system integration, such as test notifications. But, in order not to overcomplicate things, for our project we will take the small testify library, which adds just a few primitives for checking conditions and creating mock objects.



Download the code for the fourth step:



 > git checkout step-4 


Let's start by writing tests for models. Create a file models_test.go. To be detected by the go test utility, functions with tests must satisfy the following pattern:



 func TestXxx(*testing.T) 


Let's write our first test that will check for the proper creation of a Bin object:



 func TestNewBin(t *testing.T) {
 	now := time.Now().Unix()
 	bin := NewBin()
 	if assert.NotNil(t, bin) {
 		assert.Equal(t, len(bin.Name), 6)
 		assert.Equal(t, bin.RequestCount, 0)
 		assert.Equal(t, bin.Created, bin.Updated)
 		assert.True(t, bin.Created < (now+1))
 		assert.True(t, bin.Created > (now-1))
 	}
 }


All assertion methods in testify accept a *testing.T object as the first parameter.

Next, we test all the scenarios, not forgetting error paths and boundary values. I will not include the code of all the tests in the article, as there is quite a lot of it; you can read it in the repository. I’ll only touch on the most interesting moments.



Pay attention to the file api_test.go, where we test our REST API. To avoid depending on the implementation of our data storage, we add a mock object that implements the Storage interface. We do this with the testify mock package . It provides a mechanism for easily writing mock objects, which can then be used instead of real objects in tests.



Here is its code:



 type MockedStorage struct {
 	mock.Mock
 }

 func (s *MockedStorage) CreateBin(_ *Bin) error {
 	args := s.Mock.Called()
 	return args.Error(0)
 }

 func (s *MockedStorage) UpdateBin(bin *Bin) error {
 	args := s.Mock.Called(bin)
 	return args.Error(0)
 }

 func (s *MockedStorage) LookupBin(name string) (*Bin, error) {
 	args := s.Mock.Called(name)
 	return args.Get(0).(*Bin), args.Error(1)
 }

 func (s *MockedStorage) LookupBins(names []string) ([]*Bin, error) {
 	args := s.Mock.Called(names)
 	return args.Get(0).([]*Bin), args.Error(1)
 }

 func (s *MockedStorage) LookupRequest(binName, id string) (*Request, error) {
 	args := s.Mock.Called(binName, id)
 	return args.Get(0).(*Request), args.Error(1)
 }

 func (s *MockedStorage) CreateRequest(bin *Bin, req *Request) error {
 	args := s.Mock.Called(bin)
 	return args.Error(0)
 }

 func (s *MockedStorage) LookupRequests(binName string, from, to int) ([]*Request, error) {
 	args := s.Mock.Called(binName, from, to)
 	return args.Get(0).([]*Request), args.Error(1)
 }


Further, in the tests themselves, when creating the API, we inject our mock object:



 req, _ := http.NewRequest("GET", "/api/v1/bins/", nil)
 api = GetApi()
 mockedStorage := &MockedStorage{}
 api.MapTo(mockedStorage, (*Storage)(nil))
 res = httptest.NewRecorder()
 mockedStorage.On("LookupBins", []string{}).Return([]*Bin(nil), errors.New("Storage error"))
 api.ServeHTTP(res, req)
 mockedStorage.AssertExpectations(t)
 if assert.Equal(t, res.Code, 500) {
 	assert.Contains(t, res.Body.String(), "Storage error")
 }


In the test, we describe the expected calls to the mock object and the responses we need. So when we call s.Mock.Called(names) inside a mocked method, testify matches the method name and the given parameters, and args.Get(0) returns the first of the declared return values, in this case realBin. Besides Get, which returns a value of type interface{}, there are helper methods Int, String, Bool and Error that cast the value to the type we need. The mockedStorage.AssertExpectations(t) call checks that all expected methods were called during the test.



Also interesting here is the ResponseRecorder object created by httptest.NewRecorder: it implements the ResponseWriter behavior and lets us see what is eventually returned (response code, headers and body) without sending the response anywhere.



To run the tests, run the command:



 > go test ./src/skimmer
 ok  	_/.../src/skimmer	0.032s


The test command has a large number of flags; you can read about them like this:



 > go help testflag 


You can play with them, but now we are interested in the following command (relevant for Go version 1.2):



 > go test ./src/skimmer/ -coverprofile=c.out && go tool cover -html=c.out 


If it didn’t work, you may need to install the coverage tool first:



 > go get code.google.com/p/go.tools/cmd/cover 


This command runs the tests and saves the coverage profile to the c.out file, and then the go tool utility creates an HTML version that opens in the browser.

Test coverage in Go is implemented in quite an interesting way: before compilation, the source files are rewritten with counters inserted into the code. For example, this code:



 func Size(a int) string {
 	switch {
 	case a < 0:
 		return "negative"
 	case a == 0:
 		return "zero"
 	}
 	return "enormous"
 }


turns into this:



 func Size(a int) string {
 	GoCover.Count[0] = 1
 	switch {
 	case a < 0:
 		GoCover.Count[2] = 1
 		return "negative"
 	case a == 0:
 		GoCover.Count[3] = 1
 		return "zero"
 	}
 	GoCover.Count[1] = 1
 	return "enormous"
 }


It is also possible to show not just coverage, but how many times each section of code was executed during testing. As always, you can read more in the documentation.


Now that we have a full-fledged REST API covered with tests, we can start on the decorations and build a web interface.



Step Five. Decorations and a web interface.



Go ships with a full-fledged library for working with HTML templates , but we will build a so-called single-page application that works with the API directly via JavaScript. AngularJS will help us with this.



Update the code for the new step:



 > git checkout step-5 


As mentioned in the first chapter, Martini has a handler for serving static files; by default it serves them from the public directory. Put the necessary JS and CSS libraries there. I will not describe how the frontend works, since that is not the purpose of this article; you can look at the source files yourself, and for people familiar with Angular everything is quite simple.



To display the main page, we add a separate handler:



 api.Get("**", func(r render.Render) {
 	r.HTML(200, "index", nil)
 })




The glob pattern ** means that the index.html template will be rendered for any address. For templates to work correctly, we added options when creating the Renderer, indicating where to get templates from. Plus, to avoid conflicts with Angular templates, we reassigned the delimiters from {{ }} to {[{ }]}.



 api.Use(render.Renderer(render.Options{
 	Directory:  "public/static/views",
 	Extensions: []string{".html"},
 	Delims:     render.Delims{"{[{", "}]}"},
 }))




In addition, the Color field (three bytes storing an RGB value) and the Favicon field (a data-URI image generated from that color) were added to the Bin model; they are filled in randomly when an object is created, so that different bin objects can be distinguished by color.



 type Bin struct {
 	...
 	Color   [3]byte `json:"color"`
 	Favicon string  `json:"favicon"`
 }

 func NewBin() *Bin {
 	color := RandomColor()
 	bin := Bin{
 		...
 		Color:   color,
 		Favicon: Solid16x16gifDatauri(color),
 	}
 	...
 }


Now we have an almost full-featured web application; let's run it:



 > go run ./src/main.go 


And open 127.0.0.1:3000 in a browser to play around.



Unfortunately, the application still has two problems: when the program exits, all data is lost, and there is no separation between users: everyone sees the same thing. Well, let's fix that.



Step Six. Add a little privacy.


Download the code for the sixth step:



 > git checkout step-6 


We will separate users from each other using sessions. First, let's choose where to store them. Sessions in martini-contrib are based on the sessions package from the Gorilla web toolkit.

Gorilla is a set of tools for building web frameworks. All of these tools are loosely coupled, which lets you take any part and build it into your own project.

This allows us to use the session stores already implemented in Gorilla. Ours will be cookie-based.



Create a session repository:



 func GetApi(config *Config) *martini.ClassicMartini {
 	...
 	store := sessions.NewCookieStore([]byte(config.SessionSecret))
 	...


The NewCookieStore function accepts a pair of keys as parameters, the first key in the pair is needed for authentication, and the second for encryption. The second key can be skipped. To be able to rotate keys without losing sessions, you can use several pairs of keys. When creating a session, the keys of the first pair will be used, but when checking data, all keys are used in order, starting with the first pair.


Since we need different keys for different applications, we move this parameter into the Config object, which will later help us configure the application based on environment variables or launch flags.



Add a middleware handler to our API that adds session support:



 // Sessions is a Middleware that maps a session.Session service into the Martini handler chain.
 // Sessions can use a number of storage solutions with the given store.
 func Sessions(name string, store Store) martini.Handler {
 	return func(res http.ResponseWriter, r *http.Request, c martini.Context, l *log.Logger) {
 		// Map to the Session interface
 		s := &session{name, r, l, store, nil, false}
 		c.MapTo(s, (*Session)(nil))
 		// Use before hook to save out the session
 		rw := res.(martini.ResponseWriter)
 		rw.Before(func(martini.ResponseWriter) {
 			if s.Written() {
 				check(s.Session().Save(r, res), l)
 			}
 		})
 		...
 		c.Next()
 	}
 }


As can be seen from the code, a session is created for each request and added to the request context. At the end of the request, just before the buffered data is written out, the session data is saved if it has been changed.



Now let's rewrite our history (which used to be just a slice), the history.go file:



 type History interface {
 	All() []string
 	Add(string)
 }

 type SessionHistory struct {
 	size    int
 	name    string
 	session sessions.Session
 	data    []string
 }

 func (history *SessionHistory) All() []string {
 	if history.data == nil {
 		history.load()
 	}
 	return history.data
 }

 func (history *SessionHistory) Add(name string) {
 	if history.data == nil {
 		history.load()
 	}
 	history.data = append(history.data, "")
 	copy(history.data[1:], history.data)
 	history.data[0] = name
 	history.save()
 }

 func (history *SessionHistory) save() {
 	size := history.size
 	if size > len(history.data) {
 		size = len(history.data)
 	}
 	history.session.Set(history.name, history.data[:size])
 }

 func (history *SessionHistory) load() {
 	sessionValue := history.session.Get(history.name)
 	history.data = []string{}
 	if sessionValue != nil {
 		if values, ok := sessionValue.([]string); ok {
 			history.data = append(history.data, values...)
 		}
 	}
 }

 func NewSessionHistoryHandler(size int, name string) martini.Handler {
 	return func(c martini.Context, session sessions.Session) {
 		history := &SessionHistory{size: size, name: name, session: session}
 		c.MapTo(history, (*History)(nil))
 	}
 }


In the NewSessionHistoryHandler handler, we create a SessionHistory object that implements the History interface (which describes adding and querying all history entries) and map it into the context of each request. The SessionHistory object has helper methods load and save that load data from and save data to the session; loading from the session happens only on demand. Now, in all API methods where a history slice was used before, the new History type will be used.



From this point on, each user sees his own history of Bin objects, but anyone with a direct link can still see any Bin. Let's fix this by adding the ability to create private Bin objects.



Create two new fields in Bin:



 type Bin struct {
 	...
 	Private   bool   `json:"private"`
 	SecretKey string `json:"-"`
 }


The SecretKey field will store the key giving access to a private Bin (one where the Private flag is true). Let's add a method that makes our object private:



 func (bin *Bin) SetPrivate() {
 	bin.Private = true
 	bin.SecretKey = rs.Generate(32)
 }


In order to create private Bins, our frontend will send a JSON object with the private flag when creating an object. To parse the incoming JSON, we wrote a small DecodeJsonPayload method, which reads the request body and unpacks it into the structure we need:



 func DecodeJsonPayload(r *http.Request, v interface{}) error {
 	content, err := ioutil.ReadAll(r.Body)
 	r.Body.Close()
 	if err != nil {
 		return err
 	}
 	err = json.Unmarshal(content, v)
 	if err != nil {
 		return err
 	}
 	return nil
 }


Change the API now to implement the new behavior:



 api.Post("/api/v1/bins/", func(r render.Render, storage Storage, history History, session sessions.Session, req *http.Request) {
 	payload := Bin{}
 	if err := DecodeJsonPayload(req, &payload); err != nil {
 		r.JSON(400, ErrorMsg{fmt.Sprintf("Decoding payload error: %s", err)})
 		return
 	}
 	bin := NewBin()
 	if payload.Private {
 		bin.SetPrivate()
 	}
 	if err := storage.CreateBin(bin); err == nil {
 		history.Add(bin.Name)
 		if bin.Private {
 			session.Set(fmt.Sprintf("pr_%s", bin.Name), bin.SecretKey)
 		}
 		r.JSON(http.StatusCreated, bin)
 	} else {
 		r.JSON(http.StatusInternalServerError, ErrorMsg{err.Error()})
 	}
 })


First, we create a payload object of type Bin whose fields are filled in by DecodeJsonPayload from the request body. If the private option is set in the incoming data, we make our bin private. Then, for private objects, we save the key into the session: session.Set(fmt.Sprintf("pr_%s", bin.Name), bin.SecretKey) . Now we need to change the other API methods so that they check for the key in the session for private Bin objects.



This is done approximately like this:



 api.Get("/api/v1/bins/:bin", func(r render.Render, params martini.Params, session sessions.Session, storage Storage) {
 	if bin, err := storage.LookupBin(params["bin"]); err == nil {
 		if bin.Private && bin.SecretKey != session.Get(fmt.Sprintf("pr_%s", bin.Name)) {
 			r.JSON(http.StatusForbidden, ErrorMsg{"The bin is private"})
 		} else {
 			r.JSON(http.StatusOK, bin)
 		}
 	} else {
 		r.JSON(http.StatusNotFound, ErrorMsg{err.Error()})
 	}
 })


The other methods are done by analogy. Some tests were also adjusted to account for the new behavior; the specific changes can be seen in the code.



If you now run the application in different browsers or in incognito mode, you can verify that the history differs, and that only the browser in which a private Bin was created has access to it.



Everything is good, but now all the objects in our storage live practically forever, which is hardly correct, since memory is not infinite, so let's limit their lifetime.



Step Seven. Clearing out the unnecessary.





Download the code for the seventh step:



 > git checkout step-7 


Add another field to the base storage structure:



 type BaseStorage struct {
 	...
 	binLifetime int64
 }


It stores the maximum lifetime of a Bin object and its associated requests. Now let's rewrite our in-memory storage, memory.go. The main method cleans all binRecords that have not been updated for more than binLifetime seconds:



 func (storage *MemoryStorage) clean() {
 	storage.Lock()
 	defer storage.Unlock()
 	now := time.Now().Unix()
 	for name, binRecord := range storage.binRecords {
 		if binRecord.bin.Updated < (now - storage.binLifetime) {
 			delete(storage.binRecords, name)
 		}
 	}
 }


We also add a timer and methods for working with it to the MemoryStorage type:



 type MemoryStorage struct {
 	...
 	cleanTimer *time.Timer
 }

 func (storage *MemoryStorage) StartCleaning(timeout int) {
 	defer func() {
 		storage.cleanTimer = time.AfterFunc(time.Duration(timeout)*time.Second,
 			func() { storage.StartCleaning(timeout) })
 	}()
 	storage.clean()
 }

 func (storage *MemoryStorage) StopCleaning() {
 	if storage.cleanTimer != nil {
 		storage.cleanTimer.Stop()
 	}
 }




The time package's AfterFunc function runs the given function in a separate goroutine (it must take no parameters, so we use a closure to capture the timeout) after a delay of type time.Duration passed in the first argument.



To scale our application horizontally, we will need to run it on different servers, so we will need separate storage for our data. We will take Redis as an example.



Step Eight. Use Redis for storage.



The official Redis documentation offers an extensive list of clients for Go. At the time of this writing, the recommended ones are radix and redigo . We will choose redigo, as it is actively developed and has a larger community.



Check out the code for this step:



 > git checkout step-8 


Let's look at the redis.go file, which contains our Storage implementation for Redis. The basic structure is quite simple:



 type RedisStorage struct {
 	BaseStorage
 	pool       *redis.Pool
 	prefix     string
 	cleanTimer *time.Timer
 }


The pool of Redis connections will be stored in pool, and prefix holds the common prefix for all keys. To create a pool, we take the code from the redigo examples:



 func getPool(server string, password string) (pool *redis.Pool) {
 	pool = &redis.Pool{
 		MaxIdle:     3,
 		IdleTimeout: 240 * time.Second,
 		Dial: func() (redis.Conn, error) {
 			c, err := redis.Dial("tcp", server)
 			if err != nil {
 				return nil, err
 			}
 			if password != "" {
 				if _, err := c.Do("AUTH", password); err != nil {
 					c.Close()
 					return nil, err
 				}
 			}
 			return c, err
 		},
 		TestOnBorrow: func(c redis.Conn, _ time.Time) error {
 			_, err := c.Do("PING")
 			return err
 		},
 	}
 	return pool
 }


In Dial, we pass a function that, after connecting to the Redis server, tries to authenticate if a password is specified, then returns the established connection. The TestOnBorrow function is called when a connection is requested from the pool; in it you can check that the connection is still alive. Its second parameter is the time since the connection was returned to the pool. We simply send a PING every time.



Also in the package we have declared several constants:



 const (
 	KEY_SEPARATOR    = "|"    // separator used when building keys
 	BIN_KEY          = "bins" // prefix for keys storing Bin objects
 	REQUESTS_KEY     = "rq"   // prefix for lists of request ids
 	REQUEST_HASH_KEY = "rhsh" // prefix for hash tables with request data
 	CLEANING_SET     = "cln"  // set of Bin names that are candidates for cleaning
 	CLEANING_FACTOR  = 3      // multiplier for the cleaning threshold
 )


Keys are built as follows:



 func (storage *RedisStorage) getKey(keys ...string) string {
 	return fmt.Sprintf("%s%s%s", storage.prefix, KEY_SEPARATOR, strings.Join(keys, KEY_SEPARATOR))
 }




To store our data in Redis, it needs to be serialized somehow. We will pick the popular msgpack format and use the popular codec library.



Let's describe the methods that serialize data into binary form and back:



 func (storage *RedisStorage) Dump(v interface{}) (data []byte, err error) {
 	var (
 		mh codec.MsgpackHandle
 		h  = &mh
 	)
 	err = codec.NewEncoderBytes(&data, h).Encode(v)
 	return
 }

 func (storage *RedisStorage) Load(data []byte, v interface{}) error {
 	var (
 		mh codec.MsgpackHandle
 		h  = &mh
 	)
 	return codec.NewDecoderBytes(data, h).Decode(v)
 }


We now describe other methods.



Creating a Bin Object


 func (storage *RedisStorage) UpdateBin(bin *Bin) (err error) {
 	dumpedBin, err := storage.Dump(bin)
 	if err != nil {
 		return
 	}
 	conn := storage.pool.Get()
 	defer conn.Close()
 	key := storage.getKey(BIN_KEY, bin.Name)
 	conn.Send("SET", key, dumpedBin)
 	conn.Send("EXPIRE", key, storage.binLifetime)
 	conn.Flush()
 	return err
 }

 func (storage *RedisStorage) CreateBin(bin *Bin) error {
 	if err := storage.UpdateBin(bin); err != nil {
 		return err
 	}
 	return nil
 }




First we serialize the bin using the Dump method. Then we take a Redis connection from the pool (not forgetting to return it with defer).

Redigo supports pipeline mode: we can add a command to the buffer with the Send method, send everything from the buffer with Flush, and get the result with Receive. The Do method combines all three into one call. You can also implement transactions; read more in the redigo documentation.


We send two commands: SET to save the Bin data under its name, and EXPIRE to set the lifetime of this record.



Getting a Bin Object


 func (storage *RedisStorage) LookupBin(name string) (bin *Bin, err error) {
 	conn := storage.pool.Get()
 	defer conn.Close()
 	reply, err := redis.Bytes(conn.Do("GET", storage.getKey(BIN_KEY, name)))
 	if err != nil {
 		if err == redis.ErrNil {
 			err = errors.New("Bin was not found")
 		}
 		return
 	}
 	err = storage.Load(reply, &bin)
 	return
 }


The helper method redis.Bytes attempts to read the response from conn.Do as a byte slice. If the object was not found, Redis returns the special error redis.ErrNil. If everything went well, the data is loaded into the bin object, passed by reference to the Load method.



Getting a list of Bin objects


 func (storage *RedisStorage) LookupBins(names []string) ([]*Bin, error) {
 	bins := []*Bin{}
 	if len(names) == 0 {
 		return bins, nil
 	}
 	args := redis.Args{}
 	for _, name := range names {
 		args = args.Add(storage.getKey(BIN_KEY, name))
 	}
 	conn := storage.pool.Get()
 	defer conn.Close()
 	if values, err := redis.Values(conn.Do("MGET", args...)); err == nil {
 		bytes := [][]byte{}
 		if err = redis.ScanSlice(values, &bytes); err != nil {
 			return nil, err
 		}
 		for _, rawbin := range bytes {
 			if len(rawbin) > 0 {
 				bin := &Bin{}
 				if err := storage.Load(rawbin, bin); err == nil {
 					bins = append(bins, bin)
 				}
 			}
 		}
 		return bins, nil
 	} else {
 		return nil, err
 	}
 }


Here, almost everything is the same as in the previous method, except that the MGET command is used to fetch a slice of data, and the helper method redis.ScanSlice loads the response into a slice of the desired type.



Creating a Request object


 func (storage *RedisStorage) CreateRequest(bin *Bin, req *Request) (err error) {
 	data, err := storage.Dump(req)
 	if err != nil {
 		return
 	}
 	conn := storage.pool.Get()
 	defer conn.Close()
 	key := storage.getKey(REQUESTS_KEY, bin.Name)
 	conn.Send("LPUSH", key, req.Id)
 	conn.Send("EXPIRE", key, storage.binLifetime)
 	key = storage.getKey(REQUEST_HASH_KEY, bin.Name)
 	conn.Send("HSET", key, req.Id, data)
 	conn.Send("EXPIRE", key, storage.binLifetime)
 	conn.Flush()
 	requestCount, err := redis.Int(conn.Receive())
 	if err != nil {
 		return
 	}
 	if requestCount < storage.maxRequests {
 		bin.RequestCount = requestCount
 	} else {
 		bin.RequestCount = storage.maxRequests
 	}
 	bin.Updated = time.Now().Unix()
 	if requestCount > storage.maxRequests*CLEANING_FACTOR {
 		conn.Do("SADD", storage.getKey(CLEANING_SET), bin.Name)
 	}
 	if err = storage.UpdateBin(bin); err != nil {
 		return
 	}
 	return
 }


First, we push the request ID onto the request list for bin.Name, then save the serialized request into a hash table, in both cases not forgetting to set a lifetime. The LPUSH command returns the number of entries in the list into requestCount; if this number exceeds the maximum multiplied by a factor, we add this Bin to the candidates for the next cleanup.



Fetching a single request and a list of requests is done by analogy with Bin objects.



Cleaning


 func (storage *RedisStorage) clean() {
 	for {
 		conn := storage.pool.Get()
 		defer conn.Close()
 		binName, err := redis.String(conn.Do("SPOP", storage.getKey(CLEANING_SET)))
 		if err != nil {
 			break
 		}
 		conn.Send("LRANGE", storage.getKey(REQUESTS_KEY, binName), storage.maxRequests, -1)
 		conn.Send("LTRIM", storage.getKey(REQUESTS_KEY, binName), 0, storage.maxRequests-1)
 		conn.Flush()
 		if values, error := redis.Values(conn.Receive()); error == nil {
 			ids := []string{}
 			if err := redis.ScanSlice(values, &ids); err != nil {
 				continue
 			}
 			if len(ids) > 0 {
 				args := redis.Args{}.Add(storage.getKey(REQUEST_HASH_KEY, binName)).AddFlat(ids)
 				conn.Do("HDEL", args...)
 			}
 		}
 	}
 }


Unlike MemoryStorage, here we only clear excess requests, since the lifetime of objects is already limited by the Redis EXPIRE command. First, we pop an element from the cleaning set, request the ids of the requests beyond the limit, and use the LTRIM command to trim the list to the size we need. Then we remove the fetched ids from the hash table with the HDEL command, which accepts several keys at once.



We have finished describing RedisStorage; next to it, in the redis_test.go file, you will find the corresponding tests.



Now let's add the ability to choose the storage when launching our application, in the api.go file:



 type RedisConfig struct {
 	RedisAddr     string
 	RedisPassword string
 	RedisPrefix   string
 }

 type Config struct {
 	...
 	Storage string
 	RedisConfig
 }

 func GetApi(config *Config) *martini.ClassicMartini {
 	var storage Storage
 	switch config.Storage {
 	case "redis":
 		redisStorage := NewRedisStorage(config.RedisAddr, config.RedisPassword, config.RedisPrefix, MAX_REQUEST_COUNT, BIN_LIFETIME)
 		redisStorage.StartCleaning(60)
 		storage = redisStorage
 	default:
 		memoryStorage := NewMemoryStorage(MAX_REQUEST_COUNT, BIN_LIFETIME)
 		memoryStorage.StartCleaning(60)
 		storage = memoryStorage
 	}
 	...


We have added a new Storage field to our configuration structure and, depending on its value, initialize either RedisStorage or MemoryStorage. We also added a RedisConfig structure for Redis-specific options.



We will also make changes to the main.go launch file:

 import (
 	"flag"
 	"skimmer"
 )

 var (
 	config = skimmer.Config{
 		SessionSecret: "secret123",
 		RedisConfig: skimmer.RedisConfig{
 			RedisAddr:     "127.0.0.1:6379",
 			RedisPassword: "",
 			RedisPrefix:   "skimmer",
 		},
 	}
 )

 func init() {
 	flag.StringVar(&config.Storage, "storage", "memory", "available storages: redis, memory")
 	flag.StringVar(&config.SessionSecret, "sessionSecret", config.SessionSecret, "")
 	flag.StringVar(&config.RedisAddr, "redisAddr", config.RedisAddr, "redis storage only")
 	flag.StringVar(&config.RedisPassword, "redisPassword", config.RedisPassword, "redis storage only")
 	flag.StringVar(&config.RedisPrefix, "redisPrefix", config.RedisPrefix, "redis storage only")
 }

 func main() {
 	flag.Parse()
 	api := skimmer.GetApi(&config)
 	api.Run()
 }




We use the flag package , which makes it easy to add startup parameters to a program. In the init function we add the "storage" flag, which stores its value directly in the Storage field of our config. We also add launch options for Redis.

The init function is special in Go: it is always executed when the package is loaded. You can learn more about how programs start up in Go in the documentation.


Now, by running our program with the --help parameter, we will see a list of available parameters:



 > go run ./src/main.go --help
 Usage of .../main:
   -redisAddr="127.0.0.1:6379": redis storage only
   -redisPassword="": redis storage only
   -redisPrefix="skimmer": redis storage only
   -sessionSecret="secret123":
   -storage="memory": available storages: redis, memory




Now we have an application that is still quite raw and unoptimized, but already ready to run on servers.



In the third part, we will talk about deploying and running the application on GAE, Cocaine and Heroku, as well as how to distribute it as a single executable file containing all the resources. We will write performance tests while doing some optimization. We will learn how to proxy requests and respond with the right data. And finally, we will build the groupcache distributed cache right into the application.



I would welcome any changes and suggestions on the article.

Source: https://habr.com/ru/post/214425/


