
lifespan events in sub-applications #649

Open
octopoulpe opened this issue Sep 30, 2019 · 12 comments

@octopoulpe

octopoulpe commented Sep 30, 2019

Hi.
I was wondering whether sub-applications (mounted on the main application) should receive startup/shutdown events and call their own handlers.
I see that this is not the case at the moment. Should we document it, or implement the dispatching of lifespan events?

@gbozee
Contributor

gbozee commented Oct 13, 2019

Assuming the sub-applications are connected using the Router, startup/shutdown events can be implemented as follows:

from starlette.routing import Mount, Router

app = Router([Mount("/api", app=service_app), Mount("", app=root_app)])


async def open_database_connection_pool():
    await service_app.service.connect()


async def close_database_connection_pool():
    await service_app.service.disconnect()


app.lifespan.add_event_handler("startup", open_database_connection_pool)
app.lifespan.add_event_handler("shutdown", close_database_connection_pool)

@octopoulpe
Author

octopoulpe commented Oct 14, 2019

Hi.
I chose to manage that through the main application handlers.

app = Starlette()

sub_applications = {"/cart": cart_app, "/account": account_app}  # etc.
for route, sub_app in sub_applications.items():
    app.mount(route, sub_app)
    
@app.on_event("startup")
async def startup():
    for sub_app in sub_applications.values():
        await sub_app.router.lifespan.startup()

The same goes for shutdown. As long as you don't go all "sub-sub-application", I think it is quite expressive since, in my case, the sub-applications don't know they are "mounted" somewhere.

Shouldn't we, at least, document it?

@tomchristie
Member

I was wondering if sub-applications (mounted on the main application) should receive startup/shutdown events and call their own handlers.

Ideally, yes.

@tomchristie
Member

tomchristie commented Nov 4, 2019

Having taken a look at this, it's really quite awkward.

You can't run each submount off the same lifespan task; instead, each would need its own sub-task. Then you need to make sure you're handling the case where lifespan events are not supported, and dealing with exception cases from each submount.

At the moment I think we need to just treat it as out-of-scope.

But yeah, we do need to document that.

@sm-Fifteen

Having applications that may or may not behave properly depending on whether they're mounted at the root of the server or as a sub-application (because sub-apps don't receive lifespan events) is likely to cause issues if the #799 context managers become the recommended way of handling application-level resources, instead of the global scope the docs currently advise. It's one of my main worries regarding fastapi/fastapi#617, because it makes applications that rely on lifespan events for initialization non-composable.

Does that issue look any less awkward to fix now than it used to?

@adriangb
Member

adriangb commented Apr 3, 2022

I think that if we deprecated the non-contextmanager lifespans we should be able to implement lifespan propagation.

I see 2 ways to go about it:

  1. The top-level router (which receives the ASGI lifespan event) "crawls" all of its routes, submounts, etc. This can technically be done with no API changes, since Router.lifespan is a public attribute, as are Router.routes, Mount.app, etc.
  2. We add an async def lifespan(self, app: ASGIApp) -> AsyncContextManager[None] method to every routing class and have them propagate the lifespan.

TL;DR: when propagating the lifespans, skip the whole ASGI aspect of it and instead just call the context managers directly.

@adriangb
Member

adriangb commented Sep 3, 2022

Coming back to this a couple of months later. I don't agree with myself above anymore. I think it is possible to do this in pure ASGI. That is not to say that introspecting into routers and mounts (and maybe adding a public lifespan() context manager) won't work, it will. But I've also been able to get this to work with pure ASGI.

Some concrete examples to look at:

  1. Introspection-based. Xpresso currently uses this approach. The main con with this approach is that whatever is introspecting needs to have knowledge of the routing classes. If everything is coming from Starlette, it kinda just works. But if you mount a non-Starlette app, this will not work.
  2. Pure ASGI. It's tricky, but I think it's possible to write an ASGI app that "propagates" lifespan events. I packaged this approach up as a middleware, but I think the Starlette application could do the same thing: https://github.com/adriangb/asgi-lifespan/blob/main/asgi_lifespan_middleware/_middleware.py. It's certainly not simple code, but I think it works (please poke holes in it if you can). The huge pro of this is that it works with any ASGI app. But what about routers? A router is a 1 to many mapping from a containing app to contained routes. Those routes can be mounted apps. So we don't just need to run the lifespan of a single mounted app: we need to run lifespans for multiple mounted apps. Surprisingly, a similar approach seems to work: https://github.com/adriangb/asgi-routing/pull/1/files#diff-16a5f890ec0ac83174fb209a886f9149b9ac775a62254d5f958737701565601e.

At this point, it seems like a tradeoff between complexity and functionality. The simplest thing to do is just not support lifespans for mounted apps at all. The least complex (in terms of LOC, at least) would be to introspect into the routing system and pull out lifespans from routing primitives we recognize. The most complex version would be where Mount does something like this and Router does something like this.

@chickeaterbanana

I tried a solution as well. It seems to work quite well as far as I can see. It solves the following challenges for me:

  • support event and context at the same time
  • support lifespan propagation
  • support multiple lifespan contextmanagers

The solution can be found here: bdc9026

@verigle

verigle commented Dec 6, 2023

app = Router([Mount("/api", app=service_app), Mount("", app=root_app)])


async def open_database_connection_pool():
    await service_app.service.connect()


async def close_database_connection_pool():
    await service_app.service.disconnect()


app.lifespan.add_event_handler("startup", open_database_connection_pool)
app.lifespan.add_event_handler("shutdown", close_database_connection_pool)

How do I import Router?

@mattmess1221

mattmess1221 commented May 3, 2024

Here's a lifespan I wrote as a workaround to this. It supports both Host and Mount route types.

import contextlib
from collections.abc import AsyncGenerator, Mapping
from typing import Any

from starlette.applications import Starlette
from starlette.routing import Host, Mount, Router


@contextlib.asynccontextmanager
async def mounted_lifespan(app: Starlette) -> AsyncGenerator[Mapping[str, Any], Any]:
    async with contextlib.AsyncExitStack() as stack:
        state = {}
        for r in app.routes:
            if isinstance(r, Host | Mount) and isinstance(r.app, Starlette | Router):
                router = r.app
                if isinstance(router, Starlette):
                    router = router.router
                result = await stack.enter_async_context(router.lifespan_context(r.app))
                if result is not None:
                    state.update(result)

        yield state

Caveats:

  • It only supports Starlette or Router mount types.
  • All state is merged into the root app, which may cause conflicts.
  • It will not trigger middleware lifespans, as that would cause the lifespan.startup.complete event to be sent multiple times.

Full example:

from starlette.requests import Request
from starlette.responses import PlainTextResponse, Response
from starlette.routing import Mount, Route

# mounted app's lifespan
@contextlib.asynccontextmanager
async def test_lifespan(app: Starlette) -> AsyncGenerator[dict, Any]:
    print("Hello, lifespan")
    yield {"data": "stuff"}


# mounted app endpoint which depends on state created by the lifespan
async def test_endpoint(request: Request) -> Response:
    return PlainTextResponse(request.state.data)


# child app with lifespan
test_app = Starlette(
    routes=[
        Route("/", test_endpoint),
    ],
    lifespan=test_lifespan
)

# root app with mounted apps
app = Starlette(
    routes=[
        Mount("/test_app", test_app),
    ],
    lifespan=mounted_lifespan,
)

@mattmess1221

mattmess1221 commented May 3, 2024

Converting it to a middleware solves caveat 1. Doing some magic with asyncio.Queue and wrapping send/receive solves caveat 3.

from asyncio import Queue
from typing import Any

import anyio
from starlette.applications import Starlette
from starlette.routing import Host, Mount
from starlette.types import ASGIApp, Message, Receive, Scope, Send


class MountedLifespanMiddleware:
    def __init__(self, app: ASGIApp) -> None:
        self.app = app

    async def __call__(self, scope: Scope, receive: Receive, send: Send) -> Any:
        if scope["type"] == "lifespan":
            queues: list[Queue[Message]] = []

            async def wrap_receive() -> Message:
                msg = await receive()
                async with anyio.create_task_group() as tasks:
                    for queue in queues:
                        tasks.start_soon(queue.put, msg)
                return msg

            async def wrap_send(message: Message) -> None:
                # swallow the children's *.complete replies (startup and
                # shutdown); only the root app's responses reach the server
                if message["type"].endswith(".complete"):
                    return
                await send(message)

            async with anyio.create_task_group() as tasks:
                app = scope.get("app")
                if isinstance(app, Starlette):
                    for r in app.routes:
                        if isinstance(r, Mount | Host):
                            queues.append(queue := Queue())
                            tasks.start_soon(r.app, scope, queue.get, wrap_send)

                await self.app(scope, wrap_receive, send)
                return

        await self.app(scope, receive, send)

@Cyberbolt

Solution:

from fastapi import FastAPI
from contextlib import asynccontextmanager, AsyncExitStack

sub_app = FastAPI()

@asynccontextmanager
async def sub_lifespan(app: FastAPI):
    print("sub app is starting up...")
    yield
    print("sub app is shutting down...")

# Use AsyncExitStack to manage context
@asynccontextmanager
async def lifespan(app: FastAPI):
    async with AsyncExitStack() as stack:
        # Manage the lifecycle of sub_app
        await stack.enter_async_context(sub_lifespan(sub_app))
        yield

app = FastAPI(lifespan=lifespan)

# Mount the sub application
app.mount("/sub", sub_app)

# Define the main application's route
@app.get("/")
async def read_main():
    return {"message": "This is the main app"}

# Define the sub application's route
@sub_app.get("/")
async def read_unipay():
    return {"message": "This is the unipay app"}
