📖 Add designs/multi-cluster.md #2746

# Multi-Cluster Support
Author: @sttts
Initial implementation: @vincepri

Last Updated on: 03/26/2024

## Table of Contents

<!--ts-->
- [Multi-Cluster Support](#multi-cluster-support)
  - [Table of Contents](#table-of-contents)
  - [Summary](#summary)
  - [Motivation](#motivation)
    - [Goals](#goals)
    - [Examples](#examples)
    - [Non-Goals/Future Work](#non-goalsfuture-work)
  - [Proposal](#proposal)
    - [Multi-Cluster-Compatible Reconcilers](#multi-cluster-compatible-reconcilers)
  - [User Stories](#user-stories)
    - [Controller Author with no interest in multi-cluster wanting the old behaviour](#controller-author-with-no-interest-in-multi-cluster-wanting-the-old-behaviour)
    - [Multi-Cluster Integrator wanting to support cluster managers like Cluster-API or kind](#multi-cluster-integrator-wanting-to-support-cluster-managers-like-cluster-api-or-kind)
    - [Multi-Cluster Integrator wanting to support apiservers with logical cluster (like kcp)](#multi-cluster-integrator-wanting-to-support-apiservers-with-logical-cluster-like-kcp)
    - [Controller Author without self-interest in multi-cluster, but open for adoption in multi-cluster setups](#controller-author-without-self-interest-in-multi-cluster-but-open-for-adoption-in-multi-cluster-setups)
    - [Controller Author who wants to support certain multi-cluster setups](#controller-author-who-wants-to-support-certain-multi-cluster-setups)
  - [Risks and Mitigations](#risks-and-mitigations)
  - [Alternatives](#alternatives)
  - [Implementation History](#implementation-history)

<!--te-->

## Summary

Controller-runtime today only allows writing controllers against a single cluster.
Multi-cluster use-cases require the creation of multiple managers and/or cluster
objects. This proposal is about adding native support for multi-cluster use-cases
to controller-runtime.

## Motivation

This change is important because:
- multi-cluster use-cases are becoming more and more common; compare projects
  like Karmada, Crossplane or kcp. They all need to write (controller-runtime)
  controllers that operate on multiple clusters.
- writing controllers for such systems in a **portable** way is hard today.
  Consequently, there is no multi-cluster controller ecosystem, though there
  could and should be one.
- kcp maintains a [controller-runtime fork with multi-cluster support](https://github.com/kcp-dev/controller-runtime)
  because adding support on top leads to an inefficient controller design and,
  even more importantly, to divergence in the ecosystem.

> **Member:** On a general note, for the purpose of this proposal we should focus on general controller-runtime users, while we can keep kcp as a reference along with other implementations. I'd rephrase the motivation at a high level: "set up controllers and watches across multiple Kubernetes clusters in a transparent way".

> **Member:** I think we tried to cover the high-level part with the first bullet point. The kcp controller-runtime fork is just mentioned to give personal motivation, but I don't think we have to mention it here if that is preferred.

> **Member:** *"leads to an inefficient controller design"*: could you elaborate on this point?

> **Author (@sttts, Feb 10, 2025):** We explicitly don't want one controller (with its own workqueue) per cluster.
>
> Example: forget about workspaces. Imagine controller-runtime only supported controllers per one (!) namespace, i.e. another controller with another namespace for every namespace you want to serve. Same argument here, just a level higher.

> **Author:** And independently, you could imagine cases where the same is true, e.g. cluster-api cases where the workqueue should be shared. That's what this enhancement is enabling.

> **Member:** This is a decision that shouldn't be forced onto customers. I can see the case where a workqueue per cluster is desired, as it provides some built-in fairness.

> **Member:** *"We explicitly don't want one controller (with its own workqueue) per cluster."* Others might disagree. This question needs a proper evaluation with pros and cons of both approaches rather than jumping to conclusions.

> **Author:** I agree that both topologies can have their place. I am not even sure pro/con is helpful. We shouldn't be opinionated, but give the primitives for the developer to decide.

> **Member:** Copying my comment from here since this is the bigger thread: would you be comfortable with a way to tell this design to either use a shared queue (e.g. start sources) or to start controllers with a config switch [in the manager] or similar?

> **Member:** *"I agree that both topologies can have their place. We shouldn't be opinionated, but give the primitives for the developer to decide."*
>
> A stated goal of this doc is to avoid divergence in the ecosystem. Writing that down while at the same time handwaving away comments about this not actually being a good approach, and saying we shouldn't be opinionated, is not particularly convincing.
>
> Our goal is to make the majority use-case simple and other use-cases possible. This is not possible if we refuse to even look into the question of what the majority use-case is, and default to assuming that the use-case of the author of a design must be the majority use-case.

> **Author:** Fair enough. I didn't mean we shouldn't think about which workqueue topology is useful when. I meant that there are good reasons for a joint workqueue in some situations (like when I want to throttle all reconciles in a process because throughput is limited), and independent ones in other situations (like when e.g. writes to a cluster are the limiting factor).
>
> I played with a `TypedFair` queue that could be plugged in here, wrapped by throttling and delays:
>
> ```golang
> // Fair is a queue that ensures items are dequeued fairly across different
> // fairness keys while maintaining FIFO order within each key.
> type Fair TypedFair[any]
>
> // FairnessKeyFunc is a function that returns a string key for a given item.
> // Items with different keys are dequeued fairly.
> type FairnessKeyFunc[T comparable] func(T) string
>
> // NewFair creates a new Fair instance.
> func NewFair(keyFunc FairnessKeyFunc[any]) *Fair {
> 	return (*Fair)(NewTypedFair[any](keyFunc))
> }
> ```

### Goals

- Provide a way to natively write controllers that
  1. (UNIFORM MULTI-CLUSTER CONTROLLER) operate on multiple clusters in a uniform way,
     i.e. reconciling the same resources on multiple clusters, **optionally**
     - sourcing information from one central hub cluster
     - sourcing information cross-cluster.

     Example: distributed `ReplicaSet` controller, reconciling `ReplicaSets` on multiple clusters.
  2. (AGGREGATING MULTI-CLUSTER CONTROLLER) operate on one central hub cluster, aggregating information from multiple clusters.

     Example: distributed `Deployment` controller, aggregating `ReplicaSets` back into the `Deployment` object.
- Allow clusters to dynamically join and leave the set of clusters a controller operates on.
- Allow event sources to be cross-cluster:
  1. multi-cluster events that trigger reconciliation in the one central hub cluster;
  2. central hub cluster events that trigger reconciliation on multiple clusters.
- Allow (informer) indexes that span multiple clusters.
- Allow logical clusters where a set of clusters is actually backed by one physical informer store.
- Allow 3rd-parties to plug their multi-cluster adapter (in source code) into
  an existing multi-cluster-compatible code-base.
- Minimize the changes required to make a controller-runtime controller
  multi-cluster-compatible, such that 3rd-party projects have no reason to
  object to them.

Here we call a controller multi-cluster-compatible if its reconcilers get
reconcile requests in cluster `X` and do all reconciliation in cluster `X`. This
is less than being multi-cluster-aware, where reconcilers implement cross-cluster
logic.

> **Member:** So "start/stop a controller for each cluster" is out of scope, and this is purely about "add/remove sources to/from a controller on cluster arrival/departure"?

> **Author:** Related to "one workqueue", I guess. Start/stop means another workqueue, which we don't want.

### Examples

- Run a controller-runtime controller against a kubeconfig with arbitrarily many contexts, all being reconciled.
- Run a controller-runtime controller against cluster-managers like kind, Cluster-API, Open-Cluster-Manager or Hypershift.
> **Comment:** Given the cluster-api example here, is the intention that controllers will be able to reconcile CRDs in clusters that they know about that may only exist in a subset of clusters (e.g. `Machine` objects in the management cluster but not in the workload cluster)?

> **Member (@sbueringer, Apr 10, 2024):** Good point. I think that has to be possible. Otherwise we need all resources that we watch in all clusters (especially good point because today a controller crashes if a resource doesn't exist).
>
> EDIT, further down: *"For example, it can well be that every cluster has different REST mapping because installed CRDs are different. Without a context, we cannot return the right REST mapper."*

> **Author:** Good point. The question is whether one would rather group them in managers such that every manager has a uniform set of clusters.

> **Author:** See my updated PR #2726. You can now opt into the provider and/or the default cluster per controller via options:
>
> ```golang
> // EngageWithDefaultCluster indicates whether the controller should engage
> // with the default cluster of a manager. This defaults to false through the
> // global controller options of the manager if a cluster provider is set,
> // and to true otherwise. Here it can be overridden.
> EngageWithDefaultCluster *bool
> // EngageWithProviderClusters indicates whether the controller should engage
> // with the provided clusters of a manager. This defaults to true through the
> // global controller options of the manager if a cluster provider is set,
> // and to false otherwise. Here it can be overridden.
> EngageWithProviderClusters *bool
> ```
>
> There is no logic yet for a controller to decide whether to engage with a provider cluster or not; right now it engages with all of them. If the setup is more diverse, we might want such functionality, e.g. some kind of pre-check: `ctrl.WantsToEngage(ctx, cluster) bool`.

> **Comment:** I'm still understanding the changes in #2726, but I think what you are saying here makes sense to me and would solve the issue.
>
> *"some kind of pre-check: `ctrl.WantsToEngage(ctx, cluster) bool`"*: +1, I think we definitely need some way for the client user to specify when it should check a specific cluster for a resource.

> **Comment:** I somehow think it should be the author's and manager's responsibility (for now) to group them into groups which work with the pattern. At this point, we don't know what we don't know. Once this is released, we can gather some feedback on edge cases and take it from there. I suspect the majority of use cases will still be single-cluster reconcile loops.
>
> Maybe document this edge case and mark this feature overall as experimental? This way we are not committing to full production-level stability, and we allow gathering more feedback.

- Run a controller-runtime controller against a kcp shard with a wildcard watch.

### Non-Goals/Future Work

- Ship integrations for different multi-cluster setups. These should become
  out-of-tree subprojects that can evolve individually and be vendored by controller authors.
- Make controller-runtime controllers "binary pluggable".
> **Member:** What does "binary pluggable" mean in this context?

> **Author:** Something like https://pkg.go.dev/plugin to dynamically load providers.

- Manage one manager per cluster.
- Manage one controller per cluster with dedicated workqueues.
> **Member:** Suggested change: drop the bullet "Manage one controller per cluster with dedicated workqueues."; this should be a goal.

## Proposal

The `ctrl.Manager` _SHOULD_ be extended to accept an optional `cluster.Provider` via
`ctrl.Options`, implementing:

```golang
// pkg/cluster
type Provider interface {
	Get(ctx context.Context, clusterName string, opts ...Option) (Cluster, error)
	List(ctx context.Context) ([]string, error)
	Watch(ctx context.Context) (Watcher, error)
}
```

> **Member:** Why return `[]string` here rather than `[]Cluster`?

> **Member:** +1, would be good for consistency with the `Get` func.

> **Author:** There is a misunderstanding of the interface: the getter is actually the constructor. The life-cycle of the returned clusters is owned by the manager (they are added as runnables). Hence, `List` returns names, not clusters. We should rather rename `Get` to `Create` or `Connect`.
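For illustration, here is a minimal hypothetical provider backed by a static set of pre-built clusters. This is a sketch against the proposed `pkg/cluster` interfaces above, not part of the proposal itself: `staticProvider` is an illustrative name, and the `Watcher` type is not further specified in this document.

```golang
import (
	"context"
	"fmt"

	"sigs.k8s.io/controller-runtime/pkg/cluster"
)

// staticProvider serves a fixed, pre-built set of clusters.
type staticProvider struct {
	clusters map[string]cluster.Cluster
}

// Get returns the named cluster, or an error if it is unknown.
func (p *staticProvider) Get(ctx context.Context, clusterName string, opts ...cluster.Option) (cluster.Cluster, error) {
	if c, ok := p.clusters[clusterName]; ok {
		return c, nil
	}
	return nil, fmt.Errorf("cluster %q not found", clusterName)
}

// List returns the names of all known clusters.
func (p *staticProvider) List(ctx context.Context) ([]string, error) {
	names := make([]string, 0, len(p.clusters))
	for name := range p.clusters {
		names = append(names, name)
	}
	return names, nil
}

// Watch would surface cluster arrivals and departures; a static set has
// neither, so this sketch returns a nil Watcher.
func (p *staticProvider) Watch(ctx context.Context) (cluster.Watcher, error) {
	return nil, nil
}
```

Note that, per the thread above, `Get` effectively acts as a constructor in the proposal: a dynamic provider would create the `cluster.Cluster` on demand, while the manager owns its life-cycle.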
The `cluster.Cluster` _SHOULD_ be extended with a unique name identifier:
```golang
// pkg/cluster:
type Cluster interface {
	Name() string
	...
}
```

> **Member:** Same as above: this `Name()` should be converted to something like `SelfRef` or similar.

The `ctrl.Manager` will use the provider to watch clusters coming and going, and
will inform runnables implementing the `cluster.AwareRunnable` interface:

```golang
// pkg/cluster
type AwareRunnable interface {
	Engage(context.Context, Cluster) error
	Disengage(context.Context, Cluster) error
}
```

> **Member:** I'll have to think about if and how this is doable, but ideally the "thing that comes and goes" wouldn't be typed to `cluster.Cluster` and could instead be anything, so this mechanism could also be used if folks have sources that are not kube watches.

> **Member:** Would this be mostly about a more generic name? (Can't think of much that would work, maybe something like "scope".)
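As a minimal sketch of the proposed interface (not part of the proposal itself; `clusterTracker` is an illustrative name), a runnable might simply keep track of the engaged clusters:

```golang
import (
	"context"
	"sync"

	"sigs.k8s.io/controller-runtime/pkg/cluster"
)

// clusterTracker records which clusters are currently engaged.
type clusterTracker struct {
	mu       sync.Mutex
	clusters map[string]cluster.Cluster
}

// Engage is called by the manager when a cluster joins.
func (t *clusterTracker) Engage(ctx context.Context, c cluster.Cluster) error {
	t.mu.Lock()
	defer t.mu.Unlock()
	if t.clusters == nil {
		t.clusters = map[string]cluster.Cluster{}
	}
	t.clusters[c.Name()] = c
	return nil
}

// Disengage is called by the manager when a cluster leaves.
func (t *clusterTracker) Disengage(ctx context.Context, c cluster.Cluster) error {
	t.mu.Lock()
	defer t.mu.Unlock()
	delete(t.clusters, c.Name())
	return nil
}
```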
In particular, controllers implement the `AwareRunnable` interface. They react
to engaged clusters by duplicating and starting their registered `source.Source`s
and `handler.EventHandler`s for each cluster through implementation of:

```golang
// pkg/source
type DeepCopyableSyncingSource interface {
	SyncingSource
	DeepCopyFor(cluster cluster.Cluster) DeepCopyableSyncingSource
}

// pkg/handler
type DeepCopyableEventHandler interface {
	EventHandler
	DeepCopyFor(c cluster.Cluster) DeepCopyableEventHandler
}
```

> **Member:** Rather than changing the controller type directly and requiring all its dependencies to know how to deepcopy themselves, how about having something like a controllerconstructor (name tbd) in between that is filled with a `[]watchConstructor{source func(Cluster) source.Source, handler func(Cluster) handler.Handler, predicate func(cluster) []predicate.Predicate}`?

> **Member:** I think this would require more invasive changes to our public API (the `Controller` interface).

> **Member:** No, you can call `Watch` on an existing controller. The idea is to not let the `Controller` or its dependencies have any knowledge about this, but instead have a thing on top of the `Controller` that is configured with constructors that take a `cluster.Cluster` and return a source/predicate/handler, and then uses those to call `Watch` when a new cluster appears.
>
> When one disappears, it would cancel the context on the `Source`.
>
> The idea really is the opposite: I do not want the `Controller` to know how to extend itself like this. IMHO this is a higher-level abstraction.

> **Author:** Compare #2726 after the latest push. I have implemented @alvaroaleman's idea via a `MultiClusterController` wrapper implementing `cluster.AwareRunnable` and just calling `Watch` on the actual controller. All the deepcopy'ing is gone 🎉 Much nicer IMO. @alvaroaleman, great intuition!

> **Member:** The event handlers are stateless, why do we need the deepcopy for them?

> **Member (@sbueringer, Apr 11, 2024):** Looking at the prototype, I think this is because the `EventHandler` then would store the `Cluster` (it is using that info to set the `ClusterName` field in the request).

> **Author:** This is gone now in #2726. Will update the design here.

> **Member (@embik, Dec 4, 2024):** With the BYO request/eventhandler changes in #3019, I brought this back after remembering it was mentioned in the proposal. The previous version of my prototype had a weird second layer of event handler that was wrapping the actual event handler and was using the event object to communicate the cluster name. That felt all kinds of weird.
>
> Because we now have BYO `EventHandler`s, it's possible that they are not entirely stateless (as @sbueringer pointed out, some event handlers might have to store the cluster name in the absence of any information on the event object itself). So I think this approach is the cleanest, to be honest. It's entirely optional in #3019; existing `EventHandler`s don't need to be changed.

The standard implementing types, in particular `internal.Kind`, will adhere to
these interfaces.
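To illustrate why an event handler may need a per-cluster deep copy (see the thread above), here is a hypothetical sketch of a handler that stamps the cluster name onto the requests it enqueues, using the `reconcile.Request.ClusterName` extension introduced below. `enqueueWithClusterName` is an illustrative name, only the `Create` method is spelled out, and the signatures follow current controller-runtime:

```golang
import (
	"context"

	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/util/workqueue"

	"sigs.k8s.io/controller-runtime/pkg/cluster"
	"sigs.k8s.io/controller-runtime/pkg/event"
	"sigs.k8s.io/controller-runtime/pkg/handler"
	"sigs.k8s.io/controller-runtime/pkg/reconcile"
)

// enqueueWithClusterName is a stateful event handler: it remembers which
// cluster it was copied for and sets ClusterName on every request it enqueues.
type enqueueWithClusterName struct {
	clusterName string
}

// DeepCopyFor returns a fresh copy bound to the engaged cluster, so that
// per-cluster state is not shared.
func (e *enqueueWithClusterName) DeepCopyFor(c cluster.Cluster) handler.DeepCopyableEventHandler {
	return &enqueueWithClusterName{clusterName: c.Name()}
}

// Create enqueues a request carrying the cluster name; Update, Delete and
// Generic would enqueue analogously.
func (e *enqueueWithClusterName) Create(ctx context.Context, evt event.CreateEvent, q workqueue.RateLimitingInterface) {
	q.Add(reconcile.Request{
		ClusterName: e.clusterName,
		NamespacedName: types.NamespacedName{
			Namespace: evt.Object.GetNamespace(),
			Name:      evt.Object.GetName(),
		},
	})
}
```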

The `ctrl.Manager` _SHOULD_ be extended by a `cluster.Cluster` getter:
```golang
// pkg/manager
type Manager interface {
	// ...
	GetCluster(ctx context.Context, clusterName string) (cluster.Cluster, error)
}
```

> **Member:** Suggested change, in line with the other comments above:
>
> ```golang
> GetCluster(ctx context.Context, ref cluster.Reference) (cluster.Cluster, error)
> ```
The embedded `cluster.Cluster` corresponds to `GetCluster(ctx, "")`. We call the
clusters with non-empty name "provider clusters" or "engaged clusters", while
the embedded cluster of the manager is called the "default cluster" or "hub
cluster".

> **Member (on the paragraph above):** Let's be explicit here. One of the main issues is that an empty string is also a default value. We can set a `cluster.Reference` to a specific, distinguished value, which in turn is used across the entire codebase.

The `reconcile.Request` _SHOULD_ be extended by an optional `ClusterName` field:
```golang
// pkg/reconcile
type Request struct {
	ClusterName string
	types.NamespacedName
}
```

With these changes, the behaviour of controller-runtime without a set cluster
provider will be unchanged.

### Multi-Cluster-Compatible Reconcilers

Reconcilers can be made multi-cluster-compatible by changing client- and
cache-accessing code from going directly through `mgr.GetClient()` and
`mgr.GetCache()` to going through `mgr.GetCluster(req.ClusterName).GetClient()`
and `mgr.GetCluster(req.ClusterName).GetCache()`, as sketched below.
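A minimal sketch of such a reconciler, assuming the proposed `GetCluster` and `ClusterName` extensions; `ReplicaSetReconciler` is an illustrative name:

```golang
import (
	"context"

	appsv1 "k8s.io/api/apps/v1"

	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/manager"
	"sigs.k8s.io/controller-runtime/pkg/reconcile"
)

// ReplicaSetReconciler is multi-cluster-compatible: it resolves the cluster
// named in the request and does all reads and writes against that cluster.
type ReplicaSetReconciler struct {
	Manager manager.Manager
}

func (r *ReplicaSetReconciler) Reconcile(ctx context.Context, req reconcile.Request) (reconcile.Result, error) {
	// An empty ClusterName refers to the manager's default ("hub") cluster.
	cl, err := r.Manager.GetCluster(ctx, req.ClusterName)
	if err != nil {
		return reconcile.Result{}, err
	}

	rs := &appsv1.ReplicaSet{}
	if err := cl.GetClient().Get(ctx, req.NamespacedName, rs); err != nil {
		return reconcile.Result{}, client.IgnoreNotFound(err)
	}

	// ... reconcile the ReplicaSet within the same cluster ...
	return reconcile.Result{}, nil
}
```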

When building a controller like
```golang
builder.NewControllerManagedBy(mgr).
	For(&appsv1.ReplicaSet{}).
	Owns(&v1.Pod{}).
	Complete(reconciler)
```
with the described change to use `GetCluster(ctx, req.ClusterName)`, it will automatically
act as a *uniform multi-cluster controller*: it will reconcile resources from cluster `X`
in cluster `X`.

For a manager with `cluster.Provider`, the builder _SHOULD_ create a controller
that sources events **ONLY** from the provider clusters that got engaged with
the controller.
> **Member (on the paragraph above):** Does this mean that we won't get events for the "default" cluster?

> **Author:** It can be configured per controller. We default to writing a uniform multi-cluster controller, i.e. one that only reacts to provider clusters. It is not common, afaik, to have the same semantics for both a local cluster (usually the hub) and provider clusters.

Controllers that should be triggered by events on the hub cluster will have to
opt in, as in this example:

```golang
builder.NewControllerManagedBy(mgr).
	For(&appsv1.Deployment{}, builder.InDefaultCluster).
	Owns(&v1.ReplicaSet{}).
	Complete(reconciler)
```
A mixed set of sources is possible, as shown in this example.

> **Member:** This doc is completely missing info on:
>
> - the actual implementation of a multi-cluster controller (i.e. `Engage`/`Disengage` in the controller); we are not expecting users to do that, right?
> - the same for `source.Source`, but arguably a subtask of the above.

> **Author:** See implementation proposal #3019.

## User Stories

### Controller Author with no interest in multi-cluster wanting the old behaviour

- Do nothing. Controller-runtime behaviour is unchanged.

### Multi-Cluster Integrator wanting to support cluster managers like Cluster-API or kind

- Implement the `cluster.Provider` interface, either by polling the cluster registry
  or by watching objects in the hub cluster.
- For every new cluster, create an instance of `cluster.Cluster`.

### Multi-Cluster Integrator wanting to support apiservers with logical cluster (like kcp)

- Implement the `cluster.Provider` interface by watching the apiserver for logical cluster objects
  (`LogicalCluster` CRD in kcp).
- Return a facade `cluster.Cluster` that scopes all operations (client, cache, indexers)
  to the logical cluster, but is backed by one physical `cluster.Cluster` resource
  (see the sketch below).
- Add cross-cluster indexers to the physical `cluster.Cluster` object.
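A hypothetical sketch of such a facade, assuming the proposed `Name()` extension; `logicalCluster` and `newScopedClient` are illustrative names, not part of the proposal:

```golang
import (
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/cluster"
)

// logicalCluster presents one logical cluster, backed by a physical cluster.
// It embeds the physical cluster and overrides only what must be scoped.
type logicalCluster struct {
	cluster.Cluster        // the physical cluster backing this logical one
	name            string // the logical cluster's name
}

// Name reports the logical, not the physical, cluster name.
func (c *logicalCluster) Name() string { return c.name }

// GetClient wraps the physical client so that all requests are scoped to the
// logical cluster (e.g. via a request path prefix, as in kcp).
func (c *logicalCluster) GetClient() client.Client {
	return newScopedClient(c.Cluster.GetClient(), c.name)
}
```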

### Controller Author without self-interest in multi-cluster, but open for adoption in multi-cluster setups

- Replace `mgr.GetClient()` and `mgr.GetCache()` with `mgr.GetCluster(req.ClusterName).GetClient()` and `mgr.GetCluster(req.ClusterName).GetCache()`.
- Make manager and controller plumbing vendorable to allow plugging in a multi-cluster provider.

### Controller Author who wants to support certain multi-cluster setups

- Do the `GetCluster` plumbing as described above.
- Vendor 3rd-party multi-cluster providers and wire them up in `main.go`, as sketched below.
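A hypothetical `main.go` wiring sketch, assuming the `Options.WithExperimentalClusterProvider` method described under Risks and Mitigations below; `newThirdPartyProvider` stands in for any vendored `cluster.Provider`:

```golang
package main

import (
	"log"

	ctrl "sigs.k8s.io/controller-runtime"
)

func main() {
	// Attach the vendored multi-cluster provider; without a provider,
	// controller-runtime behaves exactly as it does today.
	opts := ctrl.Options{}.WithExperimentalClusterProvider(newThirdPartyProvider())

	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), opts)
	if err != nil {
		log.Fatal(err)
	}

	// ... set up controllers via builder.NewControllerManagedBy(mgr) ...

	if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
		log.Fatal(err)
	}
}
```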

## Risks and Mitigations

- The standard behaviour of controller-runtime is unchanged for single-cluster controllers.
- The activation of the multi-cluster mode is through attaching the `cluster.Provider` to the manager.
  To make it clear that the semantics are experimental, we make the `Options.provider` field private
  and add an `Options.WithExperimentalClusterProvider` method.
- We only extend these interfaces and structs:
  - `ctrl.Manager` with `GetCluster(ctx, clusterName string) (cluster.Cluster, error)`
  - `cluster.Cluster` with `Name() string`
  - `reconcile.Request` with `ClusterName string`

  We think that the behaviour of these extensions is well understood and hence low risk.
  Everything else behind the scenes is an implementation detail that can be changed
  at any time.

## Alternatives

- Multi-cluster support could be built outside of core controller-runtime. This would
  likely lead to a design with one manager per cluster, which has a number of problems:
  - only one manager can serve webhooks or metrics;
  - cluster management must be custom built;
  - logical cluster support would still require a fork of controller-runtime, and
    with that a divergence in the ecosystem. The reason is that logical clusters
    require a shared workqueue because they share the same apiserver. So for
    fair queueing, this needs deep integration into one manager;
  - informer facades are not supported in today's cluster/cache implementation.

> **Member:** No, this is precisely why pkg/cluster exists.

> **Member (@sbueringer, Feb 12, 2025):** I'm curious about the details of why it is not possible to implement this outside of CR. (I got the points around adoption across the ecosystem, just wondering about the technical reasons.)
>
> We have implemented a component (called ClusterCache) in Cluster API that seems to come very close to what this design is trying to achieve (apart from being Cluster API specific, of course), especially since the generic support was added to CR. Basically, ClusterCache in CAPI:
>
> - discovers Clusters
> - maintains a Cache per Cluster
> - allows retrieving Clients for a Cluster
> - allows adding Watches (kind Sources) for a Cluster
>   - this also allows mapping events that we get from these sources back to the one controller with the one shared work queue
>
> xref: https://github.com/kubernetes-sigs/cluster-api/tree/main/controllers/clustercache
>
> P.S. We are not creating multiple Cluster objects; instead we have our own simplified version that only contains what we need (https://github.com/kubernetes-sigs/cluster-api/blob/main/controllers/clustercache/cluster_accessor.go#L85-L89).
>
> P.S.2. I don't understand the last two points in this list.

> **Author:** The main "blocker" for this is that the builder is pretty common in 3rd-party controller code. If we do all of this outside of CR, it will very likely mean a fork of pkg/builder. I think everything else in the implementation PR could be done outside of CR.

- We could deepcopy the builder instead of the sources and handlers. This would
  lead to one controller and one workqueue per cluster. For the reason outlined
  in the previous alternative, this is not desirable.
- We could skip adding `ClusterName` to `reconcile.Request` and instead pass the
  cluster through in the context. On the one hand, this looks attractive as it
  would avoid having to touch reconcilers at all to make them multi-cluster-compatible.
  On the other hand, with `cluster.Cluster` embedded into `manager.Manager`, not
  every method of `cluster.Cluster` carries a context. So virtualizing the cluster
  in the manager leads to contradictions in the semantics.

  For example, it can well be that every cluster has a different REST mapping because
  the installed CRDs are different. Without a context, we cannot return the right
  REST mapper.

  An alternative would be to add a context to every method of `cluster.Cluster`,
  which is a much bigger and uglier change than what is proposed here.


## Implementation History

- [PR #2207 by @vincepri : WIP: ✨ Cluster Provider and cluster-aware controllers](https://github.com/kubernetes-sigs/controller-runtime/pull/2207) – with extensive review
- [PR #2726 by @sttts, replacing #2207: WIP: ✨ Cluster Provider and cluster-aware controllers](https://github.com/kubernetes-sigs/controller-runtime/pull/2726) –
  picking up #2207, addressing lots of comments, and extending the approach to what kcp needs, with a `fleet-namespace` example that demonstrates a setup similar to kcp with real logical clusters.
- [github.com/kcp-dev/controller-runtime](https://github.com/kcp-dev/controller-runtime) – the kcp controller-runtime fork