Best practices for large, shared Python Datasets?

@OwenPrice Good to know about the timeout setting - I didn't realize it existed, so I'll look into it, though it won't solve the issue of running out of compute time. As for your second question, for now the data source is just a hard-coded table on one of the sheets in the workbook. I figured that would be the easiest and fastest approach, and in fairness the problem doesn't seem to be tied to crunching the historical data - that still only takes a couple of seconds. As for optimizing the code, I'll see if I can distill what I've got down enough to attach it in a subsequent reply. ChatGPT (lol) has already helped a fair bit, and in fact the command-line version I created runs quite efficiently.
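
For context, the read step itself looks roughly like this (just a sketch - "HistoricalData" and the column names are placeholders, not my actual table):

```python
# In a =PY() / linked-mode cell: pull the hard-coded sheet table into a DataFrame.
# xl() is provided by the Python in Excel runtime, so this only runs inside Excel.
hist = xl("HistoricalData[#All]", headers=True)

# The historical crunch itself is quick - e.g. a simple aggregation like this
# finishes in a couple of seconds on the full table.
summary = hist.groupby("Ticker")["Value"].agg(["mean", "std", "last"])
summary
```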

@Jim_Kitchen So far I have only tried linked mode, because I originally coded everything in =PY(), which doesn't offer an isolated mode. The problem, as I noted before, is that Excel sometimes decides it needs to recalculate all cells even while it is still calculating the first set (i.e., even without being triggered by a cell change), and this happens with both Anaconda Code and =PY(). My next step is to try running it in isolated mode, but then I run into the same issue I posted about here: #VALUE! error for cells with a dict over a certain size - Python in Excel / Anaconda Code - Anaconda Community
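
The rough shape of a workaround I'm looking at - purely a sketch, and I haven't confirmed it actually gets past that size limit - is to stop handing the raw dict back to the grid and return a DataFrame or a single JSON string instead:

```python
import json
import pandas as pd

def results_to_table(results: dict) -> pd.DataFrame:
    """Spill the results as a table rather than one huge dict object."""
    return pd.DataFrame.from_dict(results, orient="index")

def results_to_json(results: dict) -> str:
    """Or collapse everything into one string and parse it downstream."""
    return json.dumps(results)
```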