Memoization fails for large input streams
When the input stream becomes large (roughly 1.1M nodes without children, or 6K nodes with children), the ID used to track memos in the memo table ceases to be unique. This produces "garbage" production matches with no error reported.
Attached is a patch that uses a first-class key to identify memos, so memoization survives large inputs. It also adds a check after memo retrieval that fails if the memo does not refer to the expected input position; this check can be removed once confidence in the fix is established.
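For illustration, the idea behind the patch can be sketched as follows. This is a hypothetical reconstruction, not the attached code: the class and method names (`MemoTable`, `store`, `lookup`) are invented, and the "first-class key" is modeled as a `(rule, position)` tuple hashed structurally, so it cannot collide no matter how large the input grows. The post-retrieval position check described above is included as a loud failure.

```python
class Memo:
    """A memoized parse result, remembering the position it was stored at."""
    def __init__(self, rule, pos, result):
        self.rule = rule
        self.pos = pos
        self.result = result

class MemoTable:
    def __init__(self):
        self._table = {}

    def store(self, rule, pos, result):
        # Key on the (rule, position) pair itself rather than a derived
        # integer ID, so keys stay unique regardless of input size.
        self._table[(rule, pos)] = Memo(rule, pos, result)

    def lookup(self, rule, pos):
        memo = self._table.get((rule, pos))
        if memo is not None and memo.pos != pos:
            # Sanity check: fail immediately if the retrieved memo does
            # not refer to the expected input position, instead of
            # silently returning a garbage match.
            raise AssertionError(
                f"memo for {rule!r} stored at {memo.pos}, expected {pos}")
        return memo
```

With structural keys the verification branch should never trigger; it exists only to surface any remaining collision instead of letting it corrupt the parse.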