
Adding GEP-3539: Gateway API to Expose Pods on Cluster-Internal IP Address (ClusterIP Gateway) #3608

Open · wants to merge 5 commits into main from gep-clusterip-gateway

Conversation


@ptrivedi ptrivedi commented Feb 10, 2025

Recommend reviewing deploy preview so examples are inlined: https://deploy-preview-3608--kubernetes-sigs-gateway-api.netlify.app/geps/gep-3539/

Signed-off-by: Pooja Trivedi [email protected]

What type of PR is this?

/kind gep

What this PR does / why we need it:

This GEP documents how Gateway API can be used to accomplish ClusterIP Service behavior. It also proposes a DNS record format for ClusterIP Gateways, proposes an EndpointSelector resource, and briefly touches on using Gateway API to define LoadBalancer and NodePort behaviors.
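
For a quick sense of the shape, here is a condensed sketch assembled from the GEP's examples (the listener block and exact apiVersions are my fill-ins; see the deploy preview for the authoritative YAML):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: cluster-ip
spec:
  controllerName: "cluster-ip-controller"
---
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: example-cluster-ip-gateway
spec:
  gatewayClassName: cluster-ip
  addresses:
  - value: 10.12.0.15
  listeners:
  - name: tcp          # illustrative listener; the GEP's examples carry the details
    port: 8080
    protocol: TCP
```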

Which issue(s) this PR fixes:

Fixes #3539

Does this PR introduce a user-facing change?:

NONE

@k8s-ci-robot k8s-ci-robot added release-note-none Denotes a PR that doesn't merit a release note. kind/gep PRs related to Gateway Enhancement Proposal(GEP) labels Feb 10, 2025

linux-foundation-easycla bot commented Feb 10, 2025

CLA Signed

The committers listed above are authorized under a signed CLA.

@k8s-ci-robot k8s-ci-robot added cncf-cla: no Indicates the PR's author has not signed the CNCF CLA. do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. labels Feb 10, 2025
@k8s-ci-robot
Contributor

Hi @ptrivedi. Thanks for your PR.

I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot k8s-ci-robot added the size/L Denotes a PR that changes 100-499 lines, ignoring generated files. label Feb 10, 2025
@ptrivedi ptrivedi force-pushed the gep-clusterip-gateway branch from afc6467 to 835e6a3 Compare February 10, 2025 21:30
@ptrivedi ptrivedi force-pushed the gep-clusterip-gateway branch from 835e6a3 to 6a061ca Compare February 10, 2025 21:48
@k8s-ci-robot k8s-ci-robot added cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. and removed cncf-cla: no Indicates the PR's author has not signed the CNCF CLA. labels Feb 10, 2025
@ptrivedi
Author

Adding this comment here for tracking a few open items resulting from the comments on the google doc here: https://docs.google.com/document/d/1N-C-dBHfyfwkKufknwKTDLAw4AP2BnJlnmx0dB-cC4U/edit?tab=t.0

  1. The topology-aware routing feature needs to be discussed and hashed out in detail. Features like internal/externalTrafficPolicy should then be appropriately morphed into and provided as part of topology-aware routing.
  2. The EndpointSelector resource and DNS for Gateway topics warrant follow-up GEPs focused on those areas.
  3. Headless, ExternalName, and other DNS functionality may warrant a separate DNS API/object. Subject to further discussion.
  4. Broader discussion is needed around where we implement this functionality: does it replace the Service API completely in the long term (in which case we need a migration plan), or does it become an underlying implementation of Service functionality, leaving the simpler UX of the Service API unchanged for end users while letting advanced users work with Gateway API resources directly?

@robscott @bowei @aojea @howardjohn @mskrocki

@k8s-ci-robot k8s-ci-robot added cncf-cla: no Indicates the PR's author has not signed the CNCF CLA. and removed cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. labels Feb 11, 2025
@ptrivedi ptrivedi force-pushed the gep-clusterip-gateway branch 3 times, most recently from 1e793b0 to b5e81ee Compare February 12, 2025 15:17
@k8s-ci-robot k8s-ci-robot added cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. and removed cncf-cla: no Indicates the PR's author has not signed the CNCF CLA. labels Feb 12, 2025
* Fix missing image
* Change GEP status to Memorandum
* Make GEP navigable
* Crop trailing whitespace from images

Signed-off-by: Pooja [email protected]
@ptrivedi ptrivedi force-pushed the gep-clusterip-gateway branch from b5e81ee to e876ced Compare February 12, 2025 15:35
@ptrivedi
Author

/assign @thockin

@thockin thockin left a comment

First: LOVE IT

The questions I keep coming back to are all around how the node-proxy knows to pay attention to THIS gateway so it can implement the clusterIP or nodePort or externalTrafficPolicy or ...


### EndpointSelector as Backend

A Route can forward traffic to the endpoints selected via selector rules defined in EndpointSelector.

FWIW, I can imagine a path toward maybe making this a regular core feature. I am sure that it would be tricky but I don't think it's impossible.

Eg.

Define a Service with selector foo=bar. That triggers us to create a PodSelector for foo=bar. That triggers the endpoints controller(s) to do their thing. Same as we do with IP.
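
Purely to make that concrete, a hypothetical sketch (PodSelector is not an existing API; the group, version, and naming scheme below are all made up):

```yaml
# Hypothetical object the Service controller would create from a Service
# with selector foo=bar; endpoints controllers would then watch PodSelector
# instead of re-resolving label selectors themselves.
apiVersion: networking.k8s.io/v1alpha1   # hypothetical group/version
kind: PodSelector
metadata:
  name: my-service-foo-bar               # derived from the Service; illustrative
spec:
  selector:
    matchLabels:
      foo: bar
```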

Author

Interesting thought.

For starters at least, there seemed to be agreement on having a GEP for EndpointSelector as the next step.


As always, Gateway proves something is a good idea, then core steals the spotlight.

Define a Service with selector foo=bar. That triggers us to create a PodSelector for foo=bar. That triggers the endpoints controller(s) to do their thing.

FWIW NetworkPolicies also contain selectors that need to be resolved to Pods, and we've occasionally talked about how nice it would be if the selector-to-pod mapping could be handled centrally, rather than every NP impl needing to implement that itself, often doing it redundantly on every node.

I guess in theory, we could do that with EndpointSlice even, since kube-proxy will ignore EndpointSlices that don't have a label pointing back to a Service, so we could just have another set of EndpointSlices for NetworkPolicies... (EndpointSlice has a bunch of fields that are wrong for NetworkPolicy but most of them are optional and could just be left unset...)

Though this also reminds me of my theory that EndpointSlice should have been a gRPC API rather than an object stored in etcd. The EndpointSlice controller can re-derive the entire (controller-generated) EndpointSlice state from Services and Pods at any time, and it needs to keep all that state in memory while it's running anyway. So it should just serve that information out to the controllers that need it (kube-proxy, gateways) in an efficient use-case-specific form (kind of like the original kpng idea) rather than writing it all out to etcd.

(Alternate version: move discovery.k8s.io to an aggregated apiserver that is part of the EndpointSlice controller, and have it serve EndpointSlices out of memory rather than out of etcd.)
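
A sketch of that in-theory idea: kube-proxy only consumes EndpointSlices labeled with kubernetes.io/service-name, so a slice correlated to a NetworkPolicy by some other label (hypothetical below) would be ignored by service proxies:

```yaml
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: allow-monitoring-4x7kq            # generated-style name, illustrative
  labels:
    # no kubernetes.io/service-name label here, so kube-proxy skips this slice;
    # a hypothetical label ties it to its NetworkPolicy instead
    policy.example.io/network-policy: allow-monitoring
addressType: IPv4
endpoints:
- addresses:
  - 10.0.1.5
- addresses:
  - 10.0.2.7
```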

metadata:
  name: cluster-ip
spec:
  controllerName: "cluster-ip-controller"

Is this name "special" or can it be anything?

Contributor

The name can be anything, but implementations must only reconcile GatewayClasses that have a controllerName they expect. An implementation must completely ignore GatewayClass objects whose controllerName does not match its own, and must not update them at all (to prevent fighting over status).

Some implementations allow this string to be configured (for example, Contour does, so that you can run multiple instances of Contour in a cluster).
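
Concretely, something like this (class names are arbitrary; the controllerName values are hypothetical configured strings in the Contour-style multi-instance setup):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: contour-internal                  # name is user-chosen
spec:
  controllerName: projectcontour.io/gateway-controller-a   # hypothetical configured value
---
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: contour-public
spec:
  controllerName: projectcontour.io/gateway-controller-b   # only the instance configured
                                                           # with this string reconciles it
```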

Contributor

Is that the behavior we want here? In Service, it's a single object with many controllers consuming it. If I want my service exposed to the CNI, kube-proxy, service mesh, observability platform, ... do I need to make N Gateways?


See expanded question under https://github.com/kubernetes-sigs/gateway-api/pull/3608/files#r1964558745

Agree with John's question, and I think it betrays a fundamental difference in perspective. I see this idea as "Services with a better API"

Contributor

Because we're using the same object that can be used in other contexts (i.e. Gateway), we need a way to disambiguate, and the way we have is GatewayClass. I'd be happy to see proposals for alternatives to GatewayClass, but I haven't seen anything to date that handles the problem that implementations of Gateway API almost always need multi-namespace access, and the only scope we currently have that's bigger than a single namespace is cluster-wide.

  name: example-cluster-ip-gateway
spec:
  addresses:
  - 10.12.0.15

How does kube-proxy (or Cilium or Antrea or ...) know which Gateways it should be capturing traffic for?

Contributor

Normally that's handled by the rollup of Gateway -> GatewayClass. Implementations own GatewayClasses that specify the correct string in GatewayClass spec.controllerName. All Gateways in that GatewayClass would need to be serviced by an implementation that can fulfill the request (that is, it both has the required functionality and, in this case of requesting a static address, is actually able to assign that address). In the case that an implementation cannot fulfill a Gateway for some reason, the Gateway must be marked as not Accepted (by having an Accepted type condition in its status with status: false).
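
For example, a rejection would surface in the Gateway's status roughly like this (UnsupportedAddress is one of the defined Accepted reasons; the message is made up):

```yaml
status:
  conditions:
  - type: Accepted
    status: "False"
    reason: UnsupportedAddress
    message: "address 10.12.0.15 cannot be assigned by this implementation"
    observedGeneration: 1
    lastTransitionTime: "2025-02-12T15:35:00Z"
```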


I can't tell if you are giving me a hard time or not :)

What I meant to ask is:

Service as a built-in API is (more or less) universally implemented by on-node agents (kube-proxy, cilium or antrea, ovn, etc). If we are trying to offer a form of ClusterIP Gateway which replaces part of the Service API, how does a user express "this is a cluster IP gateway" in a portable way such that all of the implementations know "this is for me"?

If each implementation has its own controllerName, and the GatewayClass can be named anything the cluster admin wants, how does our poor beleaguered app operator know what to put in their YAML?

Today they can say:

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: ClusterIP
  selector:
    foo: bar
  ports:
  - port: 8080

...and be confident that ANY cluster, regardless of which CNI, will allocate a virtual IP and route traffic.

I'd like to write a generic tool which does:

for each service S in `kubectl get svc -A` {
    evaluate template with S to produce an equivalent Gateway
}
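
For the my-service example above, such a tool might emit something like the sketch below; the class name is exactly the unresolved question, and the listener mapping is my assumption:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: my-service
spec:
  gatewayClassName: clusterip     # hypothetical reserved/portable name
  listeners:
  - name: port-8080               # one listener per Service port
    port: 8080
    protocol: TCP
```

A companion Route carrying the foo: bar selector (via something like EndpointSelector) would also be needed, which is part of what this GEP proposes.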

Contributor

@youngnick youngnick Feb 27, 2025

Yeah, okay, I see the use case, but this is the problem with extensions vs. core: we left the flexibility there for implementations (for good reason), and now we don't have a way to define a default GatewayClass at all, even for specific use cases.

I think that practically, a tool like you describe would need to know the gatewayclass it was targeting, and output Gateways based on that.

We could conceivably have a convention and pick a reserved name (like cni-clusterip or something), but we've been reluctant in the past to do that, preferring the increased specificity of requiring people to specify something (even though there is a friction cost to be paid there).

(And I wasn't trying to give you a hard time - I have details get pushed out of my head all the time, so wanted to make sure this hadn't happened here. 😄 But also, I wanted to help other readers understand too)

@thockin thockin Feb 27, 2025

I think that practically, a tool like you describe would need to know the gatewayclass it was targeting,

Hence my questions about "is this name special". One answer is "thou shalt use the name 'clusterip' and the 'clusterip' is the name thou shalt use", and just hope not to collide with users. Another answer is to define a sub-space of names that users can't currently use, or are exceedingly unlikely to be using e.g. k8s.io:clusterip. This is an appropriate place to ideate, right?

Contributor

Since 1.33 you can use the IPAddress object to represent a unique IP address in the cluster.

Contributor

@aojea aojea Apr 22, 2025

Official names look like a good idea, but I do not think we should make this exclusive. We already have "service.kubernetes.io/service-proxy-name" for Services, so it makes sense that we may consider multiple implementations of clusterIP; we can delegate to our prefix to indicate that this is a Service IP, and the relation with the IPAddress object will guarantee consistency (IPAddress already has a reference field and a managed-by label).

I think my strawman approach is:

  • a gateway class prefixed with clusterip.kubernetes.io, e.g. clusterip.kubernetes.io/kube-proxy or clusterip.kubernetes.io/cilium (antrea, ovn-kubernetes, ...)
  • the gateway allocates the corresponding IPAddress in the cluster to avoid conflicts:
```go
&networking.IPAddress{
	ObjectMeta: metav1.ObjectMeta{
		Name: "192.168.2.2",
		Labels: map[string]string{
			"ipaddress.kubernetes.io/managed-by": "kube-proxy",
		},
	},
	Spec: networking.IPAddressSpec{
		ParentRef: &networking.ParentReference{
			Group:     "gateway.networking.k8s.io",
			Resource:  "gateway",
			Name:      "foo",
			Namespace: "bar",
		},
	},
}
```

Contributor

@mmamczur mmamczur Apr 23, 2025

This is even more complicated for type LoadBalancer.

Currently, for Services and passthrough LBs, part of the setup belongs to kube-proxy, Cilium, etc. (routing on the nodes) and part to the cloud providers. You also have the .loadBalancerClass field in the Service API, so that you can instruct the LB controller to provision a specific kind of LB on the cloud provider side.
Wouldn't it be best to leave the GatewayClass equivalent to loadBalancerClass in this case? There would need to be something that instructs kube-proxy to do the routing on its side.

@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: ptrivedi
Once this PR has been reviewed and has the lgtm label, please ask for approval from thockin. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

{% include 'standard/clusterip-gateway/tcproute-with-endpointselector.yaml' %}
```

The EndpointSelector object is defined as follows. It allows the user to specify which endpoints
Member

Would it make sense to have a config field so we can have implementation-specific parameters?
For example, if I create a Layer 3/4 load balancer type of Gateway, I would like to express how the traffic will be distributed (what algorithm is used), the maximum number of endpoints, and from where IP addresses should be selected (ResourceClaim (Multi-Network), Annotations (Multus)...).

Author

Would it make sense to have a config field so we can have implementation specific parameters?

Yes, I think something like config would be needed.

For example, if I create a Layer 3/4 load balancer type of gateway, I would like to express how the traffic will be distributed (what algorithm is being used),

How the traffic will be distributed seems to be more of a Route-level config than an EndpointSelector-level config; e.g. backendRefs already have a weight field today (see the sketch below).

the maximum number of endpoints and from where IP addresses should be selected (ResourceClaim (Multi-Network), Annotations (Multus)...).

publishNotReadyAddresses could be one more thing that could go here
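
For reference, the existing weight mechanics on a Route look like this (backend names are hypothetical; group/kind follow the GEP's EndpointSelector example):

```yaml
rules:
- backendRefs:
  - group: networking.gke.io
    kind: EndpointSelector
    name: front-end-pods
    weight: 90
  - group: networking.gke.io
    kind: EndpointSelector
    name: front-end-canary        # hypothetical canary backend
    weight: 10
```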

Member

Yes, I think something like config would be needed.

The Gateway has a similar field, .spec.infrastructure.parametersRef, pointing to another object holding the configuration (e.g. a ConfigMap).
Otherwise, it is also possible to use runtime.RawExtension to embed arbitrary parameters in the EndpointSelector.
DRA uses it, for example: https://github.com/kubernetes/api/blob/release-1.33/resource/v1beta2/types.go#L1032
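
For illustration, the existing Gateway field looks like this (the ConfigMap name is made up; the reference is resolved in the Gateway's own namespace):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: example-cluster-ip-gateway
spec:
  gatewayClassName: cluster-ip
  infrastructure:
    parametersRef:
      group: ""                   # core API group, for ConfigMap
      kind: ConfigMap
      name: cluster-ip-params     # hypothetical
  listeners:
  - name: tcp
    port: 8080
    protocol: TCP
```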

How the traffic will be distributed seems to be more of a Route level config than EndpointSelector level config. e.g. BackendRefs already have a weight field today

Not sure. To me, this is a combination of both Route and Backend (Service, EndpointSelector...). The Route steers traffic to backends via some characteristics (L7 (HTTP...), L3/L4 (IPs, Ports, protocols)) and the backend (Service, EndpointSelector...) defines how to distribute it (Load-Balance it over a set of IPs).

publishNotReadyAddresses could be one more thing that could go here

Yes, to me publishNotReadyAddresses would also make sense there in the EndpointSelector.
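
Pulling these threads together, a speculative EndpointSelector shape; everything beyond apiVersion/kind/selector is hypothetical, just to make the discussion concrete:

```yaml
apiVersion: networking.gke.io/v1alpha1   # group/version from the GEP's example
kind: EndpointSelector
metadata:
  name: front-end-pods
spec:
  selector:
    matchLabels:
      app: front-end
  publishNotReadyAddresses: false        # hypothetical; placement debated above
  parametersRef:                         # hypothetical; mirrors Gateway's
    group: ""                            # spec.infrastructure.parametersRef
    kind: ConfigMap
    name: front-end-params
```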

configure cluster-ip and node-port when configuring a load-balancer.

But for completeness, the case shown below demonstrates how load balancer functionality analogous to
LoadBalancer Service API can be achieved using Gateway API.
Contributor

The example from the image uses load-balancer as the class. The cloud providers usually have a few variants of LBs, and preferably these would have their own classes.

But these would be somewhat unique, since the cloud provider controller would act on them, but kube-proxy, Cilium, and others would also need to do some setup on their side. Maybe we could set something on the GatewayClass indicating it's an L4 LB class, so that the node networking part has to treat it as an LB?

Author

The example from the image uses load-balancer as the class. The cloud providers usually have a few variants of LBs, and preferably these would have their own classes.

But these would be somewhat unique, since the cloud provider controller would act on them, but kube-proxy, Cilium, and others would also need to do some setup on their side. Maybe we could set something on the GatewayClass indicating it's an L4 LB class, so that the node networking part has to treat it as an LB?

Linking a similar discussion: #3608 (comment)

@gauravkghildiyal
Member

/cc

@bowei
Contributor

bowei commented Apr 21, 2025

/ok-to-test

@k8s-ci-robot k8s-ci-robot added ok-to-test Indicates a non-member PR verified by an org member that is safe to test. and removed needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. labels Apr 21, 2025
@bowei
Contributor

bowei commented Apr 21, 2025

/assign

* Change GEP status to Provisional
* Fix indentation in Route yaml
* Fix optional-selection notation
@ptrivedi ptrivedi force-pushed the gep-clusterip-gateway branch from 2839a40 to 741292c Compare April 30, 2025 15:51
@danwinship danwinship left a comment

late to the party...

@@ -0,0 +1,240 @@
# GEP-3539: ClusterIP Gateway - Gateway API to Expose Pods on Cluster-Internal IP Address

This might have started out as "ClusterIP Gateways" but at this point it's really more like "Service-equivalent functionality via Gateway API".


## Goals

* Define Gateway API usage to accomplish ClusterIP Service style behavior

Beyond the fact that it's not just ClusterIP, I think there are at least 3 use cases hiding in that sentence.

  1. "Gateway as new-and-improved Service" - Providing an API that does generally the same thing that v1.Service does, but in a cleaner and more orthogonally-extensible way, so that when people have feature requests like "I want externalTrafficPolicy: Local Services without allocating healthCheckNodePorts" (to pick the most recent example), they can do that without us needing to add Yet Another ServiceSpec Flag.
  2. "Gateway as a backend for v1.Service" - Providing an API that can do everything that v1.Service can do (even the deprecated parts and the parts we don't like), so that you can programmatically turn Services into Gateways and then the backend proxies/loadbalancers/etc would not need to look at Service objects at all.
  3. "MultiNetworkService" - Providing an API that lets users do v1.Service-equivalent things in multi-network contexts.

The GEP talks about case 2 some, but it doesn't really explain why we'd want to do that (other than via the link to Tim's KubeCon lightning talk).



apiVersion: networking.gke.io/v1alpha1
kind: EndpointSelector
metadata:
  name: front-end-pods

probably want this to work the same way EndpointSlice does, where the name is not meaningful (so as to avoid conflicts), and there's a label (or something) that correlates it with its Service
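
i.e., metadata along these lines (the correlation label is hypothetical):

```yaml
metadata:
  generateName: front-end-pods-   # server appends a random suffix; name carries no meaning
  labels:
    endpointselector.gateway.networking.k8s.io/for-route: my-tcp-route   # hypothetical
```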

| ipFamily | IPv4 <br /> IPv6 | Route level |
| publishNotReadyAddresses | True <br /> False | Route or EndpointSelector level |
| ClusterIP (headless service) | IPAddress <br /> None | GatewayClass definition for Headless Service type |
| externalName | External name reference <br /> (e.g. DNS CNAME) | GatewayClass definition for ExternalName Service type |

  • sessionAffinity - As noted elsewhere, this is not implemented compatibly by all service proxies. It's also not implemented by many LoadBalancers because historically we have mostly not done any e2e testing for non-GCE LoadBalancers.
  • externalIPs - bad alternative implementation of LoadBalancers. Needed for "exactly equivalent to Service" Gateways but not wanted for "similar to Service" Gateways.
  • externalTrafficPolicy: Local - overly-opinionated combined implementation of two separate features (preserve source IP / route traffic more efficiently). We should do this better for the "similar to Service" case.
  • publishNotReadyAddresses - is this just an early attempt to solve the problem that was later solved better by ProxyTerminatingEndpoints?

Not mentioned here:

  • trafficDistribution - I'm not sure what Gateway already has for topology, but this is definitely something that should be exposed generically.

@youngnick
Contributor

I still haven't had the bandwidth to come back and give this a full, proper pass, but I did want to point out that, while this PR is currently targeting "Provisional" status, which isn't bound by Gateway API's release cycle, if you want to move this to Experimental (and thus have something implementable) this year, an item needs to be added to the Scoping discussion at #3760 to cover it.

If folks don't feel there will be bandwidth to push this forward, we can concentrate on getting this into Provisional in the v1.4 timeframe, then look at Experimental for v1.5.

@ptrivedi
Author

I still haven't had the bandwidth to come back and give this a full, proper pass, but I did want to point out that, while this PR is currently targeting "Provisional" status, which isn't bound by Gateway API's release cycle, if you want to move this to Experimental (and thus have something implementable) this year, an item needs to be added to the Scoping discussion at #3760 to cover it.

If folks don't feel there will be bandwidth to push this forward, we can concentrate on getting this into Provisional in the v1.4 timeframe, then look at Experimental for v1.5.

Yes, there will not be bandwidth to push this forward during this cycle, hence I did not add anything to the scoping discussion. There are also open questions that need to be addressed and discussion areas to be kicked off.

@ptrivedi
Author

I still haven't had the bandwidth to come back and give this a full, proper pass, but I did want to point out that, while this PR is currently targeting "Provisional" status, which isn't bound by Gateway API's release cycle, if you want to move this to Experimental (and thus have something implementable) this year, an item needs to be added to the Scoping discussion at #3760 to cover it.

If folks don't feel there will be bandwidth to push this forward, we can concentrate on getting this into Provisional in the v1.4 timeframe, then look at Experimental for v1.5.

@youngnick target would have to be Provisional in the v1.4 timeframe, given bandwidth constraints.
