RAG and Vector Databases
In recent years, advancements in AI have made it possible for machines to generate human-like text, answer questions, and assist in various complex tasks. One approach gaining traction is Retrieval-Augmented Generation (RAG). In this blog, we’ll introduce RAG and explain how it works hand-in-hand with vector databases to deliver accurate and contextually relevant results.
What is Retrieval-Augmented Generation (RAG)?
At its core, RAG is a framework that combines two powerful techniques:
- Retrieval: Finding relevant pieces of information from a database or knowledge source.
- Generation: Using an AI model, such as GPT, to generate responses or content based on the retrieved information.
By merging these steps, RAG addresses one of the main limitations of standalone generative AI models: their reliance on pre-existing knowledge. Instead of generating answers from potentially outdated training data, RAG enhances the model’s ability to retrieve the most recent and relevant information before responding.
Why Do We Need RAG?
Here are some key reasons for using RAG:
- Accuracy: By retrieving real-time or domain-specific data, RAG improves the factual correctness of responses.
- Context-Awareness: It enables models to handle niche or highly specialized queries that require external knowledge.
- Scalability: RAG can handle vast datasets, leveraging retrieval to limit the amount of information the model needs to process.
The Role of Vector Databases in RAG
To understand how RAG works, let’s focus on the “Retrieval” step. Instead of searching plain text, modern systems use vector databases. These databases allow for fast and efficient searches through embeddings—mathematical representations of data.
What Are Vector Databases?
Traditional databases organize information in rows and columns, but they struggle with finding “semantic” matches—those based on meaning rather than exact keywords. Vector databases solve this problem by storing embeddings.
- Embeddings: These are numerical representations of data (like words, sentences, or images) created by AI models. Similar pieces of data are close together in the embedding space.
- Vector Search: Instead of keyword matching, vector databases find the closest match to a query in this embedding space.
How RAG Uses Vector Databases
Here’s how the RAG process works step-by-step:
- Create Embeddings: Data (documents, text snippets, etc.) is converted into embeddings using AI models.
- Store Embeddings: These embeddings are stored in a vector database.
- Retrieve Information: When a user asks a question, their query is also converted into an embedding and matched against the stored embeddings to find the most relevant pieces of information.
- Generate Responses: The retrieved data is passed to a generative AI model, which uses it to craft a response.
Benefits of Combining RAG and Vector Databases
- Fast and Efficient Retrieval: Vector databases ensure quick access to relevant information, even in large datasets.
- Enhanced Model Performance: By providing specific, retrieved context, generative models produce more accurate and coherent responses.
- Adaptability: The system can be updated by simply adding new data to the database, without retraining the AI model.
Example Use Case: A Customer Support Bot
Imagine a company with a knowledge base of FAQs, guides, and troubleshooting documents. Using RAG:
- The bot retrieves the most relevant documents from the vector database based on the customer’s query.
- It passes these documents to a generative AI model, which synthesizes a concise and personalized answer.
This ensures the bot delivers accurate and context-aware support, improving the customer experience.
Nvidia and RAG
There is an interesting NVIDIA blog post on Retrieval-Augmented Generation (RAG). NVIDIA emphasises how its AI frameworks, such as NeMo and Triton, facilitate the adoption of RAG. These platforms provide prebuilt tools to streamline deployment, particularly for enterprises aiming to leverage generative AI at scale.
https://blogs.nvidia.com/blog/what-is-retrieval-augmented-generation
What is RAG? Here is a quick definition:
Retrieval-Augmented Generation (RAG) combines generative models with information retrieval to craft highly specific and contextually accurate responses. It works by retrieving relevant documents or knowledge snippets from a database (GraphRAG variants use graph databases instead) to feed into a large language model (LLM), enhancing its output with up-to-date and precise information.
Comparing Infrastructure as Code: Pulumi vs. Terraform
In the ever-evolving landscape of DevOps and cloud engineering, Infrastructure as Code (IaC) has become an essential tool for automating and managing infrastructure deployments. Two of the most popular IaC tools are Pulumi and Terraform, each offering unique features and approaches to infrastructure management. In this post, we’ll delve into the differences between Pulumi and Terraform, complete with examples to help understand which tool might be right for your next project.
What is Terraform?
Terraform, developed by HashiCorp, is an open-source tool that allows you to define both cloud and on-premises resources using a declarative configuration language known as HashiCorp Configuration Language (HCL). Terraform encourages an immutable infrastructure approach: rather than patching resources indefinitely, certain changes cause the old resource to be destroyed and replaced with a new one.
resource "aws_instance" "example" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t2.micro"
}

output "ip" {
  value = aws_instance.example.public_ip
}
This simple Terraform configuration deploys an AWS EC2 instance and outputs its public IP.
What is Pulumi?
Pulumi, on the other hand, is a newer player in the IaC space that lets you define infrastructure using general-purpose programming languages such as JavaScript, TypeScript, Python, Go, and .NET. This means you can use loops, functions, and other language-specific features to generate infrastructure configurations dynamically.
import * as pulumi from "@pulumi/pulumi";
import * as aws from "@pulumi/aws";

const size = "t2.micro";

// Look up the AMI by its ID (getAmi returns a promise).
const ami = aws.ec2.getAmi({
  filters: [{ name: "image-id", values: ["ami-0c55b159cbfafe1f0"] }],
});

const server = new aws.ec2.Instance("web-server", {
  instanceType: size,
  ami: ami.then(a => a.id),
});

export const publicIp = server.publicIp;
This Pulumi script achieves the same goal as the Terraform example, but it uses TypeScript and can leverage asynchronous operations, error handling, and other advanced programming techniques.
Key Differences between Terraform and Pulumi
- Language Support:
- Terraform: Uses HCL, a domain-specific language.
- Pulumi: Uses popular programming languages like JavaScript, TypeScript, Python, etc.
- State Management:
- Terraform: Manages state files that track the state of your resources. These files can be stored locally or remotely, and managing these state files is crucial for Terraform to operate correctly.
- Pulumi: Uses a service-based state management approach by default, handling state files through the Pulumi Service, although it can be configured to use a local backend or cloud storage options like AWS S3.
- Community and Ecosystem:
- Terraform: Boasts a large community with a vast array of plugins and pre-built modules contributed by users.
- Pulumi: While growing, has a smaller community compared to Terraform. However, it offers the advantage of using existing language-specific packages and libraries.
- Learning Curve:
- Terraform: Requires learning HCL and the specific constructs of Terraform modules and providers.
- Pulumi: Leverages existing programming skills, which can lower the learning curve for software developers.
Conclusion
Choosing between Pulumi and Terraform often comes down to your team’s familiarity with programming languages and specific project requirements. If you prefer using standard programming languages and need to integrate IaC with application logic, Pulumi might be the better choice. However, if you are looking for a mature tool with a robust ecosystem and prefer a declarative approach to infrastructure as code, Terraform might be more suitable.
Both tools have their strengths and cater to different aspects of infrastructure automation. Whichever you choose, embracing IaC is a step forward in building efficient, reproducible, and scalable cloud infrastructure.
Understanding Exceptions and Errors in Go
Unlike Java or Python, which use exceptions for handling errors, Go uses a more explicit error-handling model. This post will delve deep into how errors are handled in Go, why the language designers chose this path, and how you can effectively manage errors in your Go programs.
Why Go dislikes Exceptions
The creators of Go decided to avoid exceptions for several reasons. Primarily, they aimed to create a language that encourages clear and predictable error handling. Exceptions can sometimes lead to complex control flows, which are hard to follow. They can be thrown at many points in a program and caught far away from the source of the problem, making the code harder to read and maintain.
In contrast, Go’s error handling model is designed to encourage developers to deal with errors as soon as they occur. This proximity between error occurrence and handling aims to produce more robust and maintainable code.
The Go Way: Errors as Values
In Go, errors are considered values. The error type is a built-in interface similar to fmt.Stringer:
type error interface {
    Error() string
}
An error variable represents any value that can describe itself as a string. Here is how you might typically see an error handled in Go:
func thisDoesSomething() error {
    // Attempt an operation that can fail.
    err := someOperation()
    if err != nil {
        return err
    }
    return nil
}
In this model, functions that can fail return an error as a normal return value, which is checked by the caller. This approach makes error handling a deliberate act: you have to explicitly check whether an error occurred.
Common Patterns for Error Handling in Go
Propagating Errors
When an error occurs, it is common practice in Go to return the error up to the caller of your function. This way, the caller can handle it appropriately, whether by logging the error, retrying the operation, or failing gracefully.
if err := doSomething(); err != nil {
    fmt.Println("An error occurred:", err)
    return err
}
Wrapping Errors
Go 1.13 introduced error wrapping, which allows you to add additional context to an error while preserving the original error. This is particularly useful when you want to maintain a stack trace or add descriptive messages without losing the original cause of the error:
if err := doSomething(); err != nil {
    return fmt.Errorf("doSomething failed: %w", err)
}
Custom Error Types
Sometimes, you may need more control over error handling. In such cases, you can define custom error types. This is particularly useful when you want to distinguish between different error conditions programmatically.
type MyError struct {
    Msg  string
    Code int
}

func (e *MyError) Error() string {
    return fmt.Sprintf("code %d: %s", e.Code, e.Msg)
}

func doSomething() error {
    return &MyError{"Something bad happened", 1234}
}
Best Practices for Handling Errors in Go
- Handle errors where they make sense: Don’t just propagate errors for the sake of it. Sometimes it’s better to handle the error directly where it occurs.
- Be explicit with error conditions: It’s better to check for specific error conditions than to rely on general error handling.
- Avoid panics for common errors: Use panics only for truly unexpected conditions that should terminate normal operation of your program. Common errors should be represented as normal error values.
- Document error conditions: When writing functions that return errors, document what those errors will be, and under what conditions they’ll be returned.
The elegance of Scala’s `implicit` Keyword
I have been digging into some Actor modelling in Akka and I came across the usage of the keyword implicit in Scala. Scala, a hybrid functional and object-oriented programming language, is known for its concise syntax and powerful features. Among its arsenal of features, the implicit keyword stands out for its ability to reduce boilerplate and enhance the expressiveness of code. However, for newcomers and even some seasoned programmers, implicit can be shrouded in mystery.
What is implicit, really?
In Scala, implicit is a keyword that can be applied to variables, functions, and parameters. It serves three primary purposes:
- Implicit Conversions: Automatically convert one type to another.
- Implicit Parameters: Automatically pass parameters to a function.
- Implicit Classes: Enrich existing classes with new functionality without modifying their source code.
Let’s talk about Implicit Conversions
Imagine you are working with two different types in your application, A and B, and you frequently need to convert from A to B. Instead of manually converting them every time, Scala’s implicit conversions can do the heavy lifting for you. Here is an example:
case class A(value: Int)
case class B(value: Int)
implicit def aToB(a: A): B = B(a.value)
val aInstance = A(5)
val bInstance: B = aInstance // Automatically converted from A to B
In the above snippet, the Scala compiler expects an instance of B but receives an A, so it automatically applies the aToB conversion.
Simplifying Code with Implicit Parameters
Implicit parameters can significantly reduce the verbosity of your code, especially when passing common parameters like configurations or context objects through multiple layers of functions.
implicit val timeout: Int = 5000 // 5 seconds
def fetchData(query: String)(implicit timeout: Int): Unit = {
  println(s"Fetching data for '$query' with timeout: $timeout")
}
fetchData("Scala posts") // No need to explicitly pass the timeout
In the above example, fetchData can be called without explicitly providing the timeout, so the function call is cleaner and more readable.
Enhancing Classes with Implicit Classes
Scala allows adding new methods to existing classes using implicit classes, a technique often referred to as “pimp my library”. This is particularly useful for adding utility methods to third-party classes or built-in types.
implicit class RichString(val s: String) {
  def shout: String = s.toUpperCase + "!"
}

println("hello".shout) // Outputs: HELLO!
In this way, the RichString implicit class adds a new shout method to the String class, allowing all strings to “shout”.
Let’s talk about the Context Package – Golang
In the world of Go programming, dealing with concurrent operations is part of the daily routine. As applications grow in complexity, the need to manage and cancel these operations becomes critical. Enter the context package: Go’s solution to managing multiple requests, deadlines, and cancellation signals across API boundaries. This blog post delves into the context package, providing you with the understanding and examples you need to leverage its power in your Go applications.
What is the Context Package
Introduced in Go 1.7, the context package is designed to enable request-scoped values, cancellation signals, and deadlines across API boundaries and between processes. It is particularly useful in applications involving networking, infrastructure components, and microservices.
Key Concepts
- Cancellation: The ability to signal that an operation should be stopped.
- Deadlines: Setting a time limit on how long an operation should take.
- Values: Storing and retrieving request-scoped data.
Using Context
A context.Context is created for each request by the main function or the middleware of the server. This context is passed down the call chain as a parameter to every function that needs it.
Creating Contexts
The root of any context tree is created with context.Background() or context.TODO(). From there, contexts with deadlines, timeouts, or cancellation signals are derived.
ctx := context.Background()
This context is typically used in main functions, initialization, and tests. It is never canceled, has no values, and has no deadline.
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
defer cancel() // Important to avoid leaking resources
// Use ctx in operations
This creates a context that automatically cancels after 10 seconds.
Passing Contexts
Contexts are passed as the first parameter of a function. This is a convention in Go programming.
func doSomething(ctx context.Context) error {
    // Function implementation
    return nil
}
Using Contexts for Cancellation
One of the primary uses of context is to cancel long-running operations. This is crucial for freeing up resources and stopping operations that are no longer needed.
select {
case <-ctx.Done():
    return ctx.Err()
default:
    // proceed with normal operation
}
In this example, we listen for the cancellation signal. If ctx.Done() is closed, we return the cancellation error, effectively stopping the operation.
A practical Example: HTTP Server with Context
Let’s put all this together in a practical example. Imagine we’re building an HTTP server where each request might involve a long-running operation, like querying a database.
package main

import (
    "context"
    "net/http"
    "time"
)

func longRunningOperation(ctx context.Context) (string, error) {
    // Simulate a long-running operation
    select {
    case <-time.After(5 * time.Second):
        return "operation result", nil
    case <-ctx.Done():
        return "", ctx.Err()
    }
}

func handler(w http.ResponseWriter, r *http.Request) {
    ctx, cancel := context.WithTimeout(r.Context(), 10*time.Second)
    defer cancel()

    result, err := longRunningOperation(ctx)
    if err != nil {
        http.Error(w, err.Error(), http.StatusInternalServerError)
        return
    }
    w.Write([]byte(result))
}

func main() {
    http.HandleFunc("/", handler)
    http.ListenAndServe(":8080", nil)
}
In this example, longRunningOperation listens for a cancellation signal from the context. If the operation takes too long, or if the client disconnects, the operation is cancelled, conserving resources.
Simplifying Kubernetes Health Checks with BusyBox on Minikube
In the realm of microservices and cloud-native applications, efficiency and minimalism are crucial. This is where BusyBox shines, bringing together the most commonly used UNIX utilities into a single lightweight executable. Ideal for environments where resources are scarce or need to be optimized, such as Docker containers or embedded systems, BusyBox ensures that developers and administrators have access to essential tools without the overhead of a full OS.
Deploying BusyBox in Minikube
To deploy BusyBox, we can simply create a YAML file named busybox.yaml with the following configurations:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: busybox
  namespace: default
  labels:
    app: busybox
spec:
  replicas: 1
  selector:
    matchLabels:
      app: busybox
  template:
    metadata:
      labels:
        app: busybox
    spec:
      containers:
      - name: busybox-container
        image: busybox:latest
        # Keep the container running
        command: [ "/bin/sh", "-c", "--" ]
        args: [ "while true; do sleep 30; done;" ]
        resources:
          requests:
            cpu: 30m
            memory: 64Mi
          limits:
            cpu: 100m
            memory: 128Mi
Then we can run the following command to apply the .yaml configurations:
kubectl apply -f busybox.yaml
Creating a Simple Health Check
Access the BusyBox shell and create the index.html file, as follows:
kubectl exec -it deploy/busybox -- /bin/sh
mkdir -p /www
echo "Application is healthy" > /www/index.html
Using BusyBox for Health Checks
Assuming there is an existing application running in the Minikube cluster, we can run the following command to find its pod IP – note that the namespace of reference is called development:
kubectl get pods -n development -o wide
Let’s say that we have the quote service running as per the following:
quote-deployment-7494cbf559-h8rcd 1/1 Running 0 8m2s 10.244.0.9 minikube
Now, from inside the BusyBox shell, we can connect to the application:
wget 10.244.0.9:8080
and inspect the content of the index.html:
Connecting to 10.244.0.9:8080 (10.244.0.9:8080)
saving to 'index.html'
index.html 100% |********************************************************************| 138 0:00:00 ETA
'index.html' saved
/ # cat index.html
{
"server": "itchy-blueberry-ph0yl0ly",
"quote": "668: The Neighbor of the Beast.",
"time": "2024-03-25T16:06:54.802383732Z"
}/ #
A walk in the park with gRPC, HTTP/2, and gRPC Gateway
In today’s microservices-driven world, efficient, robust, and scalable communication between services is paramount. This is where technologies like gRPC, built atop the advanced features of HTTP/2, and the gRPC Gateway come into play, offering powerful tools for developers to build interconnected systems. This post delves into the intricacies of these technologies, illustrating their architecture, advantages, and real-world applications.
What is gRPC?
gRPC is a high-performance, open-source universal RPC (Remote Procedure Call) framework initially developed by Google. It allows servers and clients, possibly written in different programming languages, to transparently communicate and execute functions on each other, much like local function calls. gRPC leverages HTTP/2 for transport, Protocol Buffers (proto) as the interface description language, and provides features such as authentication, load balancing, and more.
Core Features of gRPC
- Interface Definition Language (IDL): gRPC uses Protocol Buffers to define services and messages, providing a language-agnostic way to specify service APIs.
- Streaming Capabilities: Supports four types of streaming – Unary, Server streaming, Client streaming, and Bidirectional streaming.
- Pluggable Authentication: Offers support for various authentication mechanisms, including SSL/TLS and token-based authentication.
- Efficient Serialization: Protocol Buffers, being binary, ensure efficient serialization and deserialization, leading to reduced network usage.
Understanding HTTP/2
HTTP/2 is the second major version of the HTTP network protocol, used by the World Wide Web. It brings significant performance improvements over HTTP/1.x such as:
- Binary Framing Layer: This makes HTTP/2 more efficient and simpler to parse.
- Multiplexing: Multiple requests and responses can be in flight at the same time over a single connection, reducing latency.
- Server Push: Servers can push resources proactively to the client, improving page load times.
- Header Compression: HTTP/2 uses HPACK compression, reducing overhead.
These features make HTTP/2 particularly suitable for modern web applications requiring high performance and efficient use of resources.
gRPC Architecture
gRPC clients and servers exchange messages using a defined service method, which can be unary or streaming. The architecture revolves around the following key components:
- Protocol Buffers (Protobuf): Used for defining service methods and message schemas.
- gRPC Server: Implements the service interface and listens for client calls.
- gRPC Client: Consumes the server’s services by making RPC calls.
- HTTP/2: Underlying protocol used for transport, taking advantage of its multiplexing and binary framing features.
A sample gRPC Service Definition in Protobuf
syntax = "proto3";

package example;

service Greeter {
  rpc SayHello (HelloRequest) returns (HelloReply) {}
}

message HelloRequest {
  string name = 1;
}

message HelloReply {
  string message = 1;
}
What is gRPC Gateway?
The gRPC Gateway is a plugin for the protocol buffer compiler that generates a reverse proxy, allowing a RESTful JSON API to expose services defined in gRPC by translating RESTful HTTP/JSON requests into gRPC calls and vice versa. This enables developers to provide their APIs in both gRPC (efficient and type-safe) and RESTful style (easily consumable by web clients).
What is the purpose of gRPC Gateway?
The motivations behind gRPC Gateway and its use are as follows:
- Compatibility: Allows existing RESTful clients to consume gRPC services.
- Efficiency: Leverages gRPC for internal communications where efficiency and performance are critical.
- Simplicity: Provides a simpler way for web clients to interact with gRPC services without the need for a gRPC client.
Compatibility
One of the primary challenges in adopting new technologies like gRPC in existing systems is ensuring that they work seamlessly with the current infrastructure. Many applications and services are built on RESTful APIs, widely adopted due to their simplicity and the ubiquitous support in various programming languages and tools. The gRPC Gateway allows these existing RESTful clients to consume gRPC services without any major changes on their part. This means that developers can upgrade the backend to use gRPC’s efficient communication protocols while keeping the client-side codebase unchanged, ensuring a smooth transition and backward compatibility.
For example, consider a mobile application that interacts with a backend service using REST. If the backend service migrates to gRPC to improve performance and scalability, the mobile application can continue to operate without any changes by interacting with the gRPC Gateway, which translates RESTful HTTP requests into gRPC calls.
Efficiency
gRPC is designed for low latency and high throughput communication, making it ideal for microservices and other distributed systems where performance is crucial. By using gRPC for internal communications between services, applications can achieve significant performance improvements. The gRPC Gateway enables this efficient communication model to be extended to clients not natively supporting gRPC by translating between the HTTP/JSON and the more efficient gRPC protocols.
For example, in a microservices architecture, different services might need to communicate with each other frequently. By using gRPC for these internal communications, the services can exchange messages quickly and efficiently. External clients, such as web browsers that typically communicate via HTTP/JSON, can still interact with these services through the gRPC Gateway, ensuring that the system benefits from gRPC’s efficiency without limiting client compatibility.
Simplicity
While gRPC offers numerous advantages, its adoption on the client-side, especially in web environments, can be hindered by the need for specific client libraries and the complexity of setting up gRPC clients in environments that traditionally rely on HTTP/JSON. The gRPC Gateway simplifies this by allowing clients to interact with gRPC services using familiar HTTP/JSON, reducing the learning curve and development effort required to integrate with gRPC services.
For example, suppose a web application needs to fetch data from a backend service. Using the gRPC Gateway, the frontend developers can use standard HTTP requests with JSON, a process they are already familiar with, to interact with the backend. This simplicity accelerates development and reduces the potential for errors, as developers don’t need to learn a new protocol or integrate new libraries to communicate with the backend.
Understanding Method Sets in Go
Go is a statically typed, compiled programming language designed for simplicity and efficiency. One of its core concepts that every Go developer must grasp is the idea of method sets. This concept is pivotal in understanding how methods are attached to types and how they affect interface implementation. In this blog post, we’ll dive deep into method sets in Go, providing clear examples to illuminate their workings and implications.
What are Method Sets?
In Go, a method is a function that executes in the context of a type. A method set, on the other hand, is the collection of all the methods with a receiver of a particular type. The method set determines the interfaces that the type can implement and how the methods can be called.
Go differentiates between two types of receivers: value receivers and pointer receivers. This distinction plays a crucial role in method sets:
- Value receivers operate on copies of the original value. They can be called on both values and pointers of that type.
- Pointer receivers operate on the actual value (not a copy) and can only be called on pointers.
This differentiation leads to an important rule in Go’s method sets:
- The method set of a type T consists of all methods declared with receiver type T.
- The method set of the pointer type *T includes all methods declared with receiver *T or T.
This rule has a significant impact on interface implementation, as we will see later.
Examples of Method Sets
To illustrate the concept of method sets, let’s consider some examples.
Example 1: Value Receiver
package main

import "fmt"

type Circle struct {
    Radius float64
}

// Area method has a value receiver of type Circle
func (c Circle) Area() float64 {
    return 3.14 * c.Radius * c.Radius
}

func main() {
    c := Circle{Radius: 5}
    fmt.Println("Area:", c.Area())

    cPtr := &c
    // Even though Area has a value receiver, it can be called on a pointer.
    fmt.Println("Area through pointer:", cPtr.Area())
}
In this example, Area has a value receiver of type Circle. Hence, it can be called on both a Circle value and a pointer to Circle.
Example 2: Pointer Receiver
package main

import "fmt"

type Square struct {
    Side float64
}

// Scale method has a pointer receiver of type *Square
func (s *Square) Scale(factor float64) {
    s.Side *= factor
}

func main() {
    sq := Square{Side: 4}

    // Scale method can be called on a pointer
    sqPtr := &sq
    sqPtr.Scale(2)
    fmt.Println("Scaled side:", sq.Side)

    // Scale method can also be called on an addressable value;
    // the compiler implicitly takes the address
    sq.Scale(2)
    fmt.Println("Scaled side again:", sq.Side)
}
In this case, Scale has a pointer receiver. It can be called on both a Square value and a pointer to Square, with the compiler implicitly taking the address of sq when calling sq.Scale(2).
Interface Implementation and Method Sets
Method sets are crucial when it comes to interface implementation. A type implements an interface when its method set contains all of the interface’s methods. Because the method set of *T includes methods with both value and pointer receivers, *T can satisfy interfaces that T alone cannot.
Consider the following interface:
type Shaper interface {
    Area() float64
}
For a type T to implement Shaper, it must have an Area method with a value receiver. However, if the Area method had a pointer receiver, then only *T (a pointer to T) would satisfy the Shaper interface.
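To see this rule in action, here is a small sketch (the Rect type is invented for illustration) where only the pointer type satisfies the interface, because Area is declared with a pointer receiver:

```go
package main

import "fmt"

type Shaper interface {
	Area() float64
}

type Rect struct {
	W, H float64
}

// Area has a pointer receiver, so it belongs to *Rect's method set, not Rect's.
func (r *Rect) Area() float64 {
	return r.W * r.H
}

func main() {
	r := Rect{W: 3, H: 4}

	// var s Shaper = r   // compile error: Rect does not implement Shaper
	var s Shaper = &r // OK: *Rect's method set includes Area
	fmt.Println(s.Area())
}
```

Uncommenting the first assignment makes the compiler reject the program, which is exactly the method-set rule enforcing itself at compile time.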
Conclusion
Understanding method sets in Go is fundamental for effective Go programming, particularly when working with interfaces and method receivers. Remember that the method set of a type determines how methods can be called and what interfaces the type implements.