Understanding Method Sets in Go
Go is a statically typed, compiled programming language designed for simplicity and efficiency. One of its core concepts that every Go developer must grasp is the idea of method sets. This concept is pivotal in understanding how methods are attached to types and how they affect interface implementation. In this blog post, we’ll dive deep into method sets in Go, providing clear examples to illuminate their workings and implications.
What are Method Sets?
In Go, a method is a function that executes in the context of a type. A method set, on the other hand, is the collection of all the methods with a receiver of a particular type. The method set determines the interfaces that the type can implement and how the methods can be called.
Go differentiates between two types of receivers: value receivers and pointer receivers. This distinction plays a crucial role in method sets:
- Value receivers operate on a copy of the original value. Methods with a value receiver can be called on both values and pointers of that type.
- Pointer receivers operate on the original value (not a copy). Methods with a pointer receiver belong only to the method set of the pointer type, although the compiler lets you call them on addressable values by taking the address for you.
This differentiation leads to an important rule in Go’s method sets:
- The method set of a type T consists of all methods declared with receiver type T.
- The method set of the pointer type *T consists of all methods declared with receiver *T or T.
- This rule has a significant impact on interface implementation, as we will see later.
Examples of Method Sets
To illustrate the concept of method sets, let’s consider some examples.
Example 1: Value Receiver
package main

import "fmt"

type Circle struct {
	Radius float64
}

// Area method has a value receiver of type Circle
func (c Circle) Area() float64 {
	return 3.14 * c.Radius * c.Radius
}

func main() {
	c := Circle{Radius: 5}
	fmt.Println("Area:", c.Area())

	cPtr := &c
	// Even though Area has a value receiver, it can be called on a pointer.
	fmt.Println("Area through pointer:", cPtr.Area())
}
In this example, Area has a value receiver of type Circle. Hence, it can be called on both a Circle value and a pointer to Circle.
Example 2: Pointer Receiver
package main

import "fmt"

type Square struct {
	Side float64
}

// Scale method has a pointer receiver of type *Square
func (s *Square) Scale(factor float64) {
	s.Side *= factor
}

func main() {
	sq := Square{Side: 4}

	// Scale can be called on a pointer.
	sqPtr := &sq
	sqPtr.Scale(2)
	fmt.Println("Scaled side:", sq.Side)

	// Scale can also be called on an addressable value; the compiler
	// implicitly takes the address.
	sq.Scale(2)
	fmt.Println("Scaled side again:", sq.Side)
}
In this case, Scale has a pointer receiver. It can still be called on the value sq because the compiler implicitly rewrites sq.Scale(2) as (&sq).Scale(2). Note that this shortcut only works for addressable values; map elements and values stored in interfaces, for example, are not addressable.
Interface Implementation and Method Sets
Method sets are crucial when it comes to interface implementation. A type implements an interface by having all of the interface’s methods in its method set. Since the method set of *T includes methods with both value and pointer receivers, while the method set of T includes only value-receiver methods, the receiver type you choose determines whether T, or only *T, satisfies a given interface.
Consider the following interface:
type Shaper interface {
	Area() float64
}
For a type T to implement Shaper, it must have an Area method with a value receiver. However, if the Area method had a pointer receiver, then only *T (a pointer to T) would satisfy the Shaper interface.
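To make this concrete, here is a small sketch; the Rect type is invented for illustration:

```go
package main

import "fmt"

type Shaper interface {
	Area() float64
}

type Rect struct {
	W, H float64
}

// Area has a pointer receiver, so it belongs to *Rect's method set only.
func (r *Rect) Area() float64 {
	return r.W * r.H
}

func main() {
	r := Rect{W: 3, H: 4}

	// var s Shaper = r // compile error: Rect does not implement Shaper
	var s Shaper = &r // *Rect satisfies Shaper
	fmt.Println("Area:", s.Area())
}
```

Uncommenting the first assignment makes the compiler reject the program, which is exactly the method-set rule at work.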
Conclusion
Understanding method sets in Go is fundamental for effective Go programming, particularly when working with interfaces and method receivers. Remember that the method set of a type determines how methods can be called and what interfaces the type implements.
Lunch Bite post: HTTP Methods and Idempotency
HTTP, the foundation of data communication on the World Wide Web, employs a set of methods to define the desired action to be performed on a resource. Among these methods, GET, POST, PUT, and DELETE are fundamental, each serving a distinct purpose. One important concept associated with these methods is idempotency. In this blog post, we’ll delve into these HTTP methods, explore their idempotency, and provide illustrative examples.
HTTP Methods Overview:
1. GET:
The GET method is used to retrieve data from the specified resource. It is defined as safe: it should only read data and not modify the resource at all.
Idempotency: GET requests are idempotent (and, being safe, have no side effects in the first place). Repeated identical requests have the same effect as a single request.
Example:
GET /users/123
2. POST:
POST is used to submit data to be processed to a specified resource. It often causes a change in state or side effects on the server.
Idempotency: POST requests are generally not idempotent. Repeated identical requests may lead to different outcomes, especially if they result in the creation of new resources.
Example:
POST /users Body: { "name": "John Doe", "email": "john@example.com" }
3. PUT:
PUT is employed for updating a resource or creating it if it does not exist at the specified URI.
Idempotency: PUT requests are idempotent. Repeated identical requests should have the same effect as a single request.
Example:
PUT /users/123 Body: { "name": "Updated Name", "email": "updated@example.com" }
4. PATCH:
PATCH is used to apply partial modifications to a resource.
Idempotency: PATCH requests are not guaranteed to be idempotent. Repeated identical requests may or may not have the same effect.
Example:
PATCH /users/123 Body: { "name": "Updated Name" }
5. DELETE:
DELETE is used to request the removal of a resource at a specified URI.
Idempotency: DELETE requests are idempotent. Repeated identical requests should have the same effect as a single request.
Example:
DELETE /users/123
Understanding Idempotency:
Idempotency, in the context of HTTP methods, means that making a request multiple times produces the same result as making it once. In other words, subsequent identical requests should have no additional side effects beyond the first request.
Key Characteristics of Idempotency:
- Safety: Idempotent operations should not cause unintended side effects.
- Predictability: Repeating the same request yields the same result.
- Stable State: The server’s state after many identical requests is the same as after the first one; repeats do not change it further.
Idempotency simplifies error handling and recovery in distributed systems. If a request fails or times out, it can be safely retried without causing unexpected side effects. This property enhances the reliability and robustness of web services.
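The difference between an idempotent PUT-style operation and a non-idempotent POST-style one can be sketched with two toy functions; the names and the map-backed store are illustrative, not from any real API:

```go
package main

import "fmt"

// putName models a PUT: it sets the resource to an absolute state,
// so repeating it changes nothing after the first call.
func putName(store map[int]string, id int, name string) {
	store[id] = name
}

// postUser models a POST: each call creates a new resource,
// so repeating it creates duplicates.
func postUser(store map[int]string, name string) int {
	id := len(store) + 1
	store[id] = name
	return id
}

func main() {
	store := map[int]string{}

	// Two identical PUTs leave exactly one resource.
	putName(store, 123, "John Doe")
	putName(store, 123, "John Doe")
	fmt.Println("resources after two PUTs:", len(store))

	// Two identical POSTs create two more resources.
	postUser(store, "Jane Doe")
	postUser(store, "Jane Doe")
	fmt.Println("resources after two POSTs:", len(store))
}
```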
Idempotency and Retry Mechanism
Retry Strategies in Distributed Systems:
In distributed systems, network issues, temporary failures, or system timeouts are common challenges. When a request fails or times out, retrying the same request can be a natural response to recover from transient errors. However, incorporating retry strategies comes with its own set of complexities, and idempotency plays a crucial role in mitigating potential issues.
How Idempotency Enhances Retry Reliability:
- Prevents Unintended Side Effects:
- Idempotent operations ensure that repeating the same request does not result in unintended side effects or duplicate changes in the system.
- Without idempotency, retries might lead to the execution of the same non-idempotent operation multiple times, causing unexpected alterations in the system state.
- Consistent State:
- Idempotent operations provide a guarantee that the system state remains consistent even if a request is retried.
- Repeated execution of an idempotent operation yields the same result, preventing inconsistencies caused by partial or conflicting updates.
- Simplifies Error Handling:
- Idempotency simplifies error handling during retries. Failed requests can be retried without concerns about introducing inconsistencies or undesirable changes.
- Non-idempotent operations, when retried, may result in varying outcomes, making error recovery more challenging.
- Enhances Predictability:
- Idempotency ensures that the outcome of a retried operation is predictable. Developers can rely on the fact that repeating a request will have the same effect as the initial attempt.
- Predictability is crucial for designing robust and resilient systems, especially in scenarios where network failures or temporary glitches are common.
Implementing Idempotent Retry Strategies:
- Use Idempotent Operations:
- Design operations to be idempotent, especially those likely to be retried. This includes operations involving state changes or resource updates.
- Include Retry Identifiers:
- Implement mechanisms to include retry identifiers or tokens in requests to deduplicate retries. These identifiers can be used to recognize and discard duplicate attempts.
- Retry-After Headers:
- Utilize HTTP Retry-After headers to indicate the recommended time to wait before retrying a failed request. This helps prevent overwhelming the system with repeated immediate retries.
- Exponential Backoff:
- Apply exponential backoff strategies to gradually increase the time intervals between retry attempts. This prevents rapid and potentially harmful repeated retries.
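The strategies above can be sketched together in a few lines of Go; the server side is simulated with an in-memory dedup map, and all names are illustrative:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// seen deduplicates requests by idempotency key,
// simulating server-side retry-identifier state.
var seen = map[string]bool{}

// process applies the request only once per key; duplicate retries are no-ops.
func process(key string) {
	if seen[key] {
		return
	}
	seen[key] = true
	fmt.Println("processed", key)
}

// callWithRetry retries a flaky call with exponential backoff.
func callWithRetry(key string, call func(string) error, attempts int) error {
	delay := 10 * time.Millisecond
	for i := 0; i < attempts; i++ {
		if err := call(key); err == nil {
			return nil
		}
		time.Sleep(delay)
		delay *= 2 // double the wait before the next attempt
	}
	return errors.New("all attempts failed")
}

func main() {
	failures := 2
	flaky := func(key string) error {
		// The request may reach the server even when the response is lost.
		process(key)
		if failures > 0 {
			failures--
			return errors.New("response lost")
		}
		return nil
	}
	// Three attempts reach the server, but the key ensures a single effect.
	if err := callWithRetry("req-42", flaky, 5); err != nil {
		fmt.Println(err)
	}
	fmt.Println("applied once?", len(seen) == 1)
}
```

Even though the server is hit three times, the idempotency key guarantees the operation is applied exactly once.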
Idempotency and Exponential Backoff:
Exponential backoff is a common strategy in retry mechanisms where the waiting time between consecutive retries increases exponentially. Idempotency complements this strategy by ensuring that, regardless of the retry delay, the outcome remains consistent. If the operation is idempotent, the impact of waiting longer before a retry is minimal, as the result will be the same.
Conclusion:
In summary, idempotency and retry mechanisms are intertwined concepts in distributed systems. Idempotent operations provide a foundation for reliable and predictable retries, preventing unintended side effects, maintaining consistent state, and simplifying error recovery. When designing systems that involve retry strategies, incorporating idempotent operations is a key practice for building robust and resilient architectures.
Why is Redis so fast?
Redis is an open-source in-memory database/data store: it holds its entire dataset in main memory, and memory access is orders of magnitude faster than random disk access. Unlike traditional databases, Redis does not perform continuous reads and writes to disk for every operation, which allows for very fast, often constant-time data access, crucial for operations requiring high speed. Redis uses jemalloc, a general-purpose memory allocation library that focuses on offering robust concurrency support as well as minimising memory fragmentation. Minimising memory fragmentation offers several benefits, as follows:
- Improved Performance: Fewer cache misses and potentially higher cache efficiency, which can result in faster application execution as more data fits in faster-access memory.
- Efficient Memory Usage: By reducing fragmentation, a system makes more efficient use of its available memory. More data can be stored within the same amount of physical memory, reducing the need for expensive allocation and deallocation operations.
- Reduced Memory Waste: Fragmentation leads to “holes” of unused memory scattered throughout the memory space, which can significantly waste valuable memory resources. Minimising fragmentation ensures that these holes are either eliminated or reduced, thus lowering the amount of wasted memory.
- Predictable Performance: Applications that rely on consistent and predictable performance benefit from reduced memory fragmentation, as it leads to more stable memory usage patterns. This predictability is crucial for real-time systems and high-performance computing applications where timing is critical.
- Longer Runtime without Degradation: Systems that run for extended periods without restarting (such as servers) can suffer from increased fragmentation over time, which can degrade performance. Allocators that minimise fragmentation can help maintain consistent performance over longer runtimes.
- Enhanced Stability and Reliability: By ensuring efficient memory usage and reducing the chances of memory allocation failures due to fragmentation, systems can achieve higher stability and reliability. This is particularly important in embedded systems and critical applications where failures can have significant consequences.
The fact that Redis is so fast is, however, due to a combination of factors beyond the speed of in-memory data storage. For example:
- Single-Threaded Architecture: Redis follows a single-threaded architecture, which removes the need for context switching and synchronisation between threads. This leads to reduced overhead and increased performance. This might seem rather counter-intuitive, but, in simple terms, it means that Redis handles tasks without the complications of multiple threads. This setup, combined with smart use of system features like event loops and I/O multiplexing, allows Redis to efficiently manage many connections and operations at once.
- Non-Blocking I/O: Redis employs non-blocking I/O techniques, enabling it to manage multiple connections at once without halting other processes, which helps maintain swift request handling, even when the system is under significant load.
- Optimised Data Structures: Redis provides optimised data structures such as Lists, Sets, Sorted Sets, Hashes, Bitmaps, Bitfields and Streams, each designed to perform specific operations efficiently. For example, Sets support operations such as union, intersection and difference, and can report their cardinality.
- Lua Scripting: Redis offers Lua scripting support, enabling developers to craft intricate commands and run them directly on the server. This feature minimises network delay and latency and boosts efficiency by executing scripts server-side.
Confused about Parallelism and Concurrency? Stop here!
Rob Pike, one of Go’s creators, gave one of the best programming talks I am aware of at Heroku’s Waza conference back in 2012, titled “Concurrency is not Parallelism”. The beauty of this talk is his clear compare-and-contrast between Concurrency and Parallelism, whereby Concurrency is defined as:
“Programming as the composition of independently executing processes”
as opposed to Parallelism that is:
“Programming as the simultaneous execution of (possibly related) computations”
Enjoy the video, it is pure gold!
https://www.youtube.com/watch?v=oV9rvDllKEg
Slides: https://go.dev/talks/2012/waza.slide#1
Also illuminating is the talk at Google I/O 2012 on Go concurrency patterns: https://www.youtube.com/watch?v=f6kdp27TYZs
A try-catch business in Go
An interesting article on how to get a try-catch style mechanism in Go. Go treats errors as normal return values, promoting a more predictable and straightforward control flow. The article illustrates Go’s approach to error handling by reading a file and handling a panic situation, emphasising the use of defer and recover for managing fatal errors.
https://matheuspolitano.medium.com/how-use-try-catch-mechanism-in-golang-b1f97de62b9b
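A minimal sketch of the defer/recover pattern the article describes; the function name is illustrative:

```go
package main

import "fmt"

// safeDivide converts a runtime panic into an ordinary error value,
// mimicking a try-catch block.
func safeDivide(a, b int) (result int, err error) {
	defer func() {
		if r := recover(); r != nil {
			err = fmt.Errorf("recovered: %v", r)
		}
	}()
	return a / b, nil // panics with "integer divide by zero" when b == 0
}

func main() {
	if _, err := safeDivide(1, 0); err != nil {
		fmt.Println(err) // the panic surfaces as a normal error value
	}
	res, _ := safeDivide(10, 2)
	fmt.Println("result:", res)
}
```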
6 Prometheus Monitoring Mistakes to avoid
From Julius Volz, co-founder of Prometheus and PromLabs, 6 mistakes to avoid if you are new, or even not so new, to Prometheus:
– High Cardinality, i.e. an explosion in the number of unique time series stored in the TSDB;
– Aggregating away too many labels/dimensions when using aggregation operators such as by;
– Unscoped Metric Selectors – these can cause conflicts if we do not target the specific endpoint or service we are interested in;
– Missing for Durations in Alerting Rules – the for clause protects against data that might present gaps;
– Short Rate Windows – to calculate a rate Prometheus needs at least 2 samples, therefore too short a window might not provide the required samples;
– Using rate with the wrong metric type – for example, rate, irate and increase only work with counters!
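Two of these points, the for duration and a sufficiently wide rate window, can be illustrated with a small alerting-rule sketch; the metric name, job label and thresholds are invented for illustration:

```yaml
groups:
  - name: example
    rules:
      - alert: HighErrorRate
        # rate() needs at least two samples; with a 15s scrape interval,
        # a 5m window is comfortably wide enough.
        expr: rate(http_requests_errors_total{job="api"}[5m]) > 0.05
        # "for" protects against firing on brief gaps or a single bad scrape.
        for: 10m
        labels:
          severity: warning
```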
https://www.youtube.com/watch?v=NEMsO1qeI1s
Monorepos
At work, I was recently introduced to the idea of monorepos. While developers who write microservices tend to think of a microservice architecture as a number of codebases somehow interacting with each other, there is another school of thought that pushes the idea of organising code in a single repository. Do not fret: that does not mean creating a monolithic, tightly coupled application, i.e. a giant and far-from-modular piece of software. It only means that, instead of the code residing in different repositories, where every change forces you to tweak each one while making sure integration tests are still green and happy, everything lives in one codebase, which generally has the following features:
- Only build the services or cmds that are modified in a pushed commit;
- Build all services and/or cmds that are affected by changes in common code (i.e. pkg);
- Build all services and/or cmds that are affected by changes in vendored code.
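The selective-build idea above can be sketched as path matching over a commit’s changed files; the directory layout (services/, pkg/, vendor/) is hypothetical:

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// affectedServices decides which services to rebuild for a set of changed
// paths: a change under services/<name>/ rebuilds that service, while a
// change under pkg/ or vendor/ rebuilds everything.
func affectedServices(changed []string, all []string) []string {
	set := map[string]bool{}
	for _, p := range changed {
		switch {
		case strings.HasPrefix(p, "pkg/"), strings.HasPrefix(p, "vendor/"):
			return all // shared code changed: rebuild every service
		case strings.HasPrefix(p, "services/"):
			if parts := strings.SplitN(p, "/", 3); len(parts) >= 2 {
				set[parts[1]] = true
			}
		}
	}
	out := make([]string, 0, len(set))
	for s := range set {
		out = append(out, s)
	}
	sort.Strings(out)
	return out
}

func main() {
	all := []string{"billing", "users"}
	fmt.Println(affectedServices([]string{"services/users/main.go"}, all))
	fmt.Println(affectedServices([]string{"pkg/log/log.go"}, all))
}
```

A real CI pipeline would feed this kind of function from something like git diff --name-only, or use a build tool such as Bazel to compute the affected targets.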
https://medium.com/goc0de/how-to-golang-monorepo-4f62320a01fd
https://circleci.com/blog/monorepo-dev-practices/
https://hardyantz.medium.com/getting-started-monorepo-golang-application-with-bazel-370ed1069b4f
ApacheCon 2021 has arrived!
ApacheCon, the global Apache conference, has arrived, and it has landed in your very own living room. That is possibly one of the few times in the past year and a half where you might find yourself saying “Thank you COVID!”. ApacheCon is not only fully online, but it is also free (unless you want to donate to the Apache Software Foundation, which is all well and good). Here is the link to register:
https://www.apachecon.com/acah2021/
Have fun!