
Error in UI when using OData API #200

Open
jantoineqci opened this issue Jun 30, 2023 · 3 comments
@jantoineqci

Manually entering a new purchase order or updating lines on an existing purchase order results in an error.
Validation fails with a red X and this error: "Results: Sorry, we just updated this page. Reopen it and try again."
[screenshot: Error2]
or sometimes like this:
[screenshot: Error1]
The user must exit the line, refresh the document, and then re-enter the line.
This has been happening consistently for months. It only happens in environments where we are pulling data from the OData API v2. I think it has to do with the way BC syncs the data from the Purchase Header/Line tables to the Purchase Entity Buffer/Line Aggregate tables. I tried submitting a support ticket to Microsoft, but they were not helpful and suggested I post something here.
If anyone can help or point me in the direction of capturing more data around this error, it would be appreciated.

@JDannellHBP

JDannellHBP commented Aug 14, 2023

I know this issue is a little old now, but a customer of ours was having the exact same issue, which I only resolved a couple of weeks ago. Our use of the API didn't require us to write any data back to BC, so I was able to go to the Database Access Intent List and change the access intent to ReadOnly for the API v2 pages I was using (Sales Quote and Sales Quote Lines, which, like your tables, are also sourced from buffer tables). There is a caveat to this that I go into below, but for my implementation it was pretty painless. Changing this solved the issue almost instantly, which was particularly frustrating since I'd been going crazy looking at BC telemetry for hours and hours, but lesson learnt I guess.

A better method would be to add the Data-Access-Intent header to your GET requests, because once you change the default behaviour in the Database Access Intent List to ReadOnly, you'll get an error if you ever send a write request to that endpoint. Due to some middleware we were using, it wasn't possible to set this header to ReadOnly, so my only option was to override the default behaviour for the entire endpoint, which fortunately wasn't a problem for us.
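For anyone landing here later, setting this header on a GET request is straightforward. A minimal sketch using only Python's standard library; the endpoint URL is a placeholder in the usual BC API v2 shape, not a real tenant from this thread:

```python
import urllib.request

# Hypothetical Business Central API v2 endpoint. The tenant,
# environment, and company segments are placeholders.
BASE_URL = (
    "https://api.businesscentral.dynamics.com/v2.0/"
    "{tenant}/{environment}/api/v2.0/companies({companyId})/salesQuotes"
)

def build_readonly_request(url: str, token: str) -> urllib.request.Request:
    """Build a GET request that asks BC to serve it from the
    read-only replica via the Data-Access-Intent header."""
    req = urllib.request.Request(url, method="GET")
    req.add_header("Authorization", f"Bearer {token}")
    # Routes this request to the read-only replica; a write attempted
    # under this intent is rejected with an error.
    req.add_header("Data-Access-Intent", "ReadOnly")
    return req

req = build_readonly_request(BASE_URL, "<access-token>")
```

The per-request header only affects the call that carries it, so write requests to the same endpoint keep their default access intent, which is the advantage over flipping the whole page to ReadOnly in the Database Access Intent List.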

@jantoineqci
Author

Thank you for your reply. It is interesting that adding Data-Access-Intent: ReadOnly helps, because you would think the issue is that pulling the data modifies the record; in that case, with Data-Access-Intent: ReadOnly, the GET call would fail with an error like "Cannot modify record when Data-Access-Intent is ReadOnly." Another consideration is that Data-Access-Intent: ReadOnly is supposed to go against a read-only replica of the database, which can be out of sync with the primary. I don't know what the lag is, and I haven't found any Microsoft documentation on a guaranteed time range. What we did was create new API endpoints (for pulling data only) that go against the Purchase Header/Line tables directly.

@JDannellHBP

JDannellHBP commented Aug 14, 2023

To be honest, I'm not entirely sure. Perhaps certain codeunits or write operations tied to GET requests still run? It is an interesting question, since reports that write back to the database can't use the ReadOnly access intent.

Could it be that the buffer tables take a read lock against the base Header/Line tables, leading to some sort of JIT or consistency error when modifying the line? That would force the user to refresh the client, since BC cannot be sure that the process holding the read lock won't modify the record later in the transaction. The error dialog is completely unhelpful in this regard and really should be changed to be more specific, at least in this situation.

I too wasn't able to find any documentation on the replication lag, though in my experience it seems to be negligible. Good to see that you found a different solution, though; I'd already built the data integration in question for a project that had gone live, so I was reluctant to significantly change the endpoint I was using.

For future projects I think I'll use your solution, as until reading your original post it hadn't occurred to me that populating the buffer tables could affect the base tables. In my mind I was only grabbing data, admittedly at high frequency, but with highly selective filters and few selected columns. Cutting out the buffer tables entirely seems like the safer solution, since I can't work out with any real certainty why they would cause such a problem in the first place.
