(ValidationException) when calling the InvokeModel operation: Malformed input request: expected minLength: 1, actual: 0, please reformat your input and try again #279
Can you provide the prompts? You'll find them by enabling metadata, using the settings gear at the bottom right of the UI.

I have noticed that the issue occurs when the follow-up question is very short, e.g. "thanks", "bye", "ok bye", etc.

I was able to reduce the occurrence of this by using the following code for rephrasing the question:
I have started getting this issue again, with increased frequency. Any suggestions for a fix, please?
@ajaylamba-provar I am getting the same issue using this condensed prompt. Do you have any workaround?
@roselle11111 The workaround I applied was to fine-tune the condensation prompt to make sure that the rephrased question is never blank.
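Besides tuning the prompt, a code-level guard can enforce the same invariant. This is a minimal sketch, not code from this repo: `safe_condense` and its fallback behavior are my own naming. The idea is to fall back to the user's original question whenever the condensation step returns an empty or whitespace-only string, which is what later trips Bedrock's `minLength: 1` validation:

```python
def safe_condense(condensed: str, original: str) -> str:
    """Return the condensed (rephrased) question, unless it is empty
    or whitespace-only, in which case fall back to the original
    question so the embedding call never receives a blank string."""
    condensed = (condensed or "").strip()
    return condensed if condensed else original.strip()


if __name__ == "__main__":
    # Short follow-ups like "thanks" often condense to nothing.
    print(safe_condense("", "thanks"))                  # falls back
    print(safe_condense("What is the refund policy?", "ok bye"))
```

Wiring this between the condense chain and the retriever (rather than relying on the prompt alone) makes the fix robust to model drift.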
I sometimes get the error below in continued conversations:
```
All records failed processing. 1 individual errors logged separately below.

Traceback (most recent call last):
  File "/opt/python/aws_lambda_powertools/utilities/batch/base.py", line 500, in _process_record
    result = self.handler(record=data)
  File "/opt/python/aws_lambda_powertools/tracing/tracer.py", line 678, in decorate
    response = method(*args, **kwargs)
  File "/var/task/index.py", line 122, in record_handler
    handle_run(detail)
  File "/var/task/index.py", line 95, in handle_run
    response = model.run(
  File "/var/task/adapters/base/base.py", line 170, in run
    return self.run_with_chain(prompt, workspace_id)
  File "/var/task/adapters/base/base.py", line 107, in run_with_chain
    result = conversation({"question": user_prompt})
  File "/opt/python/langchain/chains/base.py", line 310, in __call__
    raise e
  File "/opt/python/langchain/chains/base.py", line 304, in __call__
    self._call(inputs, run_manager=run_manager)
  File "/opt/python/langchain/chains/conversational_retrieval/base.py", line 148, in _call
    docs = self._get_docs(new_question, inputs, run_manager=_run_manager)
  File "/opt/python/langchain/chains/conversational_retrieval/base.py", line 305, in _get_docs
    docs = self.retriever.get_relevant_documents(
  File "/opt/python/langchain/schema/retriever.py", line 211, in get_relevant_documents
    raise e
  File "/opt/python/langchain/schema/retriever.py", line 204, in get_relevant_documents
    result = self._get_relevant_documents(
  File "/opt/python/genai_core/langchain/workspace_retriever.py", line 13, in _get_relevant_documents
    result = genai_core.semantic_search.semantic_search(
  File "/opt/python/genai_core/semantic_search.py", line 25, in semantic_search
    return query_workspace_open_search(
  File "/opt/python/genai_core/opensearch/query.py", line 48, in query_workspace_open_search
    query_embeddings = genai_core.embeddings.generate_embeddings(
  File "/opt/python/genai_core/embeddings.py", line 28, in generate_embeddings
    ret_value.extend(_generate_embeddings_bedrock(model, batch))
  File "/opt/python/genai_core/embeddings.py", line 88, in _generate_embeddings_bedrock
    response = bedrock.invoke_model(
  File "/opt/python/botocore/client.py", line 535, in _api_call
    return self._make_api_call(operation_name, kwargs)
  File "/opt/python/aws_xray_sdk/ext/botocore/patch.py", line 38, in _xray_traced_botocore
    return xray_recorder.record_subsegment(
  File "/opt/python/aws_xray_sdk/core/recorder.py", line 456, in record_subsegment
    return_value = wrapped(*args, **kwargs)
  File "/opt/python/botocore/client.py", line 983, in _make_api_call
    raise error_class(parsed_response, operation_name)
botocore.errorfactory.ValidationException: An error occurred (ValidationException) when calling the InvokeModel operation: Malformed input request: expected minLength: 1, actual: 0, please reformat your input and try again.
```
This could be related to `get_condense_question_prompt`, where questions are rephrased. I am using Claude 2.1 for responses, Amazon Titan for embeddings, and OpenSearch for storage.
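Since the traceback dies inside `_generate_embeddings_bedrock` when `invoke_model` receives an empty string, another place to break the failure chain is just before the embedding call. This is a hedged sketch, not the repo's actual code: `prepare_embedding_batch` is a hypothetical helper that drops blank strings from a batch so Titan's `minLength: 1` constraint can never be violated:

```python
def prepare_embedding_batch(texts):
    """Strip each text and drop any that end up empty, so a blank
    rephrased question never reaches bedrock.invoke_model (Titan
    embeddings reject inputs shorter than 1 character)."""
    cleaned = (t.strip() for t in texts if t is not None)
    return [t for t in cleaned if t]


if __name__ == "__main__":
    batch = ["what is the SLA?", "", "   ", "pricing tiers"]
    print(prepare_embedding_batch(batch))  # blanks removed
```

Filtering here is defense in depth: even if the condensation prompt regresses and emits an empty question, the Bedrock call stays valid (though the caller should then decide how to answer a question that produced no embeddable text).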