
podman compose - broken log/trace to disk when using the supplied build/config_example.toml #397

Open · jwmatthews opened this issue Sep 27, 2024 · 1 comment
Labels: bug (Something isn't working)

@jwmatthews (Member):
We've recently broken logging and tracing to disk when running via podman compose up.
We are NOT writing to the shared directory that is accessible by both the container AND the host.

Below is a snippet of the startup output showing the loaded configuration:

kai_1               | Config loaded: KaiConfig(log_level='info', file_log_level='debug', log_dir='$pwd/logs', demo_mode=False, trace_enabled=True, gunicorn_workers=8, gunicorn_timeout=3600, gunicorn_bind='0.0.0.0:8080', incident_store=KaiConfigIncidentStore(solution_detectors=<SolutionDetectorKind.NAIVE: 'naive'>, solution_producers=<SolutionProducerKind.TEXT_ONLY: 'text_only'>, args=KaiConfigIncidentStorePostgreSQLArgs(provider=<KaiConfigIncidentStoreProvider.POSTGRESQL: 'postgresql'>, host='kai_db', database='kai', user='kai', password='dog8code', connection_string=None, solution_detection=<SolutionDetectorKind.NAIVE: 'naive'>)), models=KaiConfigModels(provider='ChatIBMGenAI', args={'model_id': 'mistralai/mixtral-8x7b-instruct-v01', 'parameters': {'max_new_tokens': 2048}}, template=None, llama_header=None, llm_retries=5, llm_retry_delay=10.0), solution_consumers=[<SolutionConsumerKind.DIFF_ONLY: 'diff_only'>, <SolutionConsumerKind.LLM_SUMMARY: 'llm_summary'>])
kai_1               | Console logging for 'kai' is set to level 'INFO'
kai_1               | File logging for 'kai' is set to level 'DEBUG' writing to file: '/kai/logs/kai_server.log'

Note the file log path, '/kai/logs/kai_server.log': it is inside the container, not in the shared directory.

When running from within the container, we need the logging directory to be /podman_compose/logs,
which is set as KAI__LOG_DIR: "/podman_compose/logs" in https://github.com/konveyor/kai/blob/main/compose.yaml
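
For context, here is a sketch of the relevant compose.yaml fragment. Only the KAI__LOG_DIR value is taken from the linked file; the service name and surrounding keys are illustrative:

services:
  kai:
    environment:
      # Log directory the container should use, shared with the host
      KAI__LOG_DIR: "/podman_compose/logs"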

I am using the build/config_example.toml we supply as an example.
My steps were (shell form shown after this list):

  1. cp build/config_example.toml build/config.toml
  2. Build the latest code into a local image with tag 'local'
  3. TAG="local" podman compose up
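
In shell form, the steps look roughly like this. The copy and compose commands are verbatim; the build invocation is paraphrased, and the image name and Containerfile path are assumptions:

$ cp build/config_example.toml build/config.toml
$ podman build -t kai:local -f build/Containerfile .   # hypothetical build command
$ TAG="local" podman compose up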

Example of the config.toml I used:

$ cat build/config.toml
log_level = "info"
file_log_level = "debug"
log_dir = "$pwd/logs"
demo_mode = false
trace_enabled = true

# **Solution consumers** This controls the strategies the LLM uses to consume
# solutions.
# - "diff_only": consumes only the diff between the the initial and solved
#   states
# - "llm_summary": If the incident store's `solution_producers` field is
#   configured to use "llm_lazy", the solution used will include an
#   llm-generated summary of the changes required.

solution_consumers = ["diff_only", "llm_summary"]

[incident_store]

# **incident_store.solution_detectors** This controls the strategies the
# incident store uses to detect when an incident was solved.
# - "naive": Incidents are considered the same if every field in the incident is
#   the exact same.
# - "line_match": Same as "naive", except we take into account if the line has
#   moved about in the file.

solution_detectors = "naive"

# **incident_store.solution_producers** This controls the strategies the
# incident store uses to produce incidents.
# - "text_only": Only the textual information (diff, code before, code after) of
#   the incident is stored.
# - "llm_lazy": Same as "text_only", but will earmark the solution for LLM
#   summary generation upon retrieval.

solution_producers = "text_only"

# Only set this if you want to use a different incident store than the default.
# If you are running it using podman compose, you should probably leave this
# alone.

# **Postgresql incident store**
# ```
# [incident_store.args]
# provider = "postgresql"
# host = "127.0.0.1"
# database = "kai"
# user = "kai"
# password = "dog8code"
# ```

# **In-memory sqlite incident store**
# ```
# [incident_store.args]
# provider = "sqlite"
# connection_string = "sqlite:///:memory:"
# ```

[models]
provider = "ChatIBMGenAI"

[models.args]
model_id = "mistralai/mixtral-8x7b-instruct-v01"

# **IBM served granite**
# ```
# [models]
#   provider = "ChatIBMGenAI"

#   [models.args]
#   model_id = "ibm/granite-13b-chat-v2"
# ```

# **IBM served mistral**
# ```
# [models]
#   provider = "ChatIBMGenAI"

#   [models.args]
#   model_id = "mistralai/mixtral-8x7b-instruct-v01"
# ```

# **IBM served llama2**
# ```
# [models]
#   provider = "ChatIBMGenAI"

#   [models.args]
#   model_id = "meta-llama/llama-2-13b-chat"
# ```

# **IBM served llama3**
# ```
#   # Note:  llama3 complains if we use more than 2048 tokens
#   # See:  https://github.com/konveyor-ecosystem/kai/issues/172
# [models]
#   provider = "ChatIBMGenAI"

#   [models.args]
#   model_id = "meta-llama/llama-3-70b-instruct"
#   parameters.max_new_tokens = 2048
# ```

# **Ollama**
# ```
# [models]
#   provider = "ChatOllama"

#   [models.args]
#   model = "mistral"
# ```

# **OpenAI GPT 4**
# ```
# [models]
#   provider = "ChatOpenAI"

#   [models.args]
#   model = "gpt-4"
# ```

# **OpenAI GPT 3.5**
# ```
# [models]
#   provider = "ChatOpenAI"

#   [models.args]
#   model = "gpt-3.5-turbo"
# ```

# **Amazon Bedrock served Anthropic Claude 3.5 Sonnet**
# ```
# [models]
# provider = "ChatBedrock"

# [models.args]
# model_id = "anthropic.claude-3-5-sonnet-20240620-v1:0"
# ```

# **Google Gemini Pro**
# ```
# [models]
# provider = "ChatGoogleGenerativeAI"

# [models.args]
# model = "gemini-pro"
# ```
jwmatthews added the bug label on Sep 27, 2024

@jwmatthews (Member, Author) commented:
The issue was:

  • I used the example build/config_example.toml we supply and copied it to build/config.toml
  • The example we supply has the line: log_dir = "$pwd/logs"
  • We have changed the precedence rules so that a config file passed in as a command line argument takes precedence over environment variables.

Here is an example of what I changed to allow writing from the container to the shared location in the host filesystem:

$ cat build/config.toml
log_level = "info"
file_log_level = "debug"
#log_dir = "$pwd/logs"
demo_mode = false
trace_enabled = true

I commented out the log_dir entry so that Kai falls back to the value we supply via compose.yaml (which sets the KAI__LOG_DIR environment variable). A minimal sketch of that precedence behavior follows.
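
Below is a minimal Python sketch of the precedence rule described above; it is not Kai's actual implementation, and the function name is hypothetical:

import os

def resolve_log_dir(config_file_value: str | None) -> str:
    # Sketch only: a log_dir set in the config file shadows the environment
    # variable, so KAI__LOG_DIR applies only when the file omits log_dir.
    if config_file_value is not None:
        # Config file wins; this is why "$pwd/logs" overrode the env var.
        return config_file_value
    # Fall back to the environment variable set in compose.yaml.
    return os.environ.get("KAI__LOG_DIR", "logs")

print(resolve_log_dir("$pwd/logs"))  # -> "$pwd/logs" (env var ignored)
print(resolve_log_dir(None))         # -> KAI__LOG_DIR, e.g. "/podman_compose/logs"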
