Hello everyone. I have a conda environment I am setting up with Kubernetes and PySpark. The initial issue shows up when I try to run pyspark:
    (kpyEnv) C:\WINDOWS\system32>pyspark
    The system cannot find the path specified.
After much digging in places like here, I figured it was a missing dependency or an environment-variable problem. I verified the downloads, and that my system environment variables for Hadoop (HADOOP_HOME), Java (JAVA_HOME), and Spark (SPARK_HOME) were all correct. But when I echo %JAVA_HOME% inside my conda environment, it gives me a path to \myEnvironmentName\Library.
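For reference, this is roughly what I ran from inside the activated environment to check (the expected values are the system-level paths I set; only JAVA_HOME comes back wrong):

    (kpyEnv) C:\WINDOWS\system32>echo %HADOOP_HOME%
    (kpyEnv) C:\WINDOWS\system32>echo %SPARK_HOME%
    (kpyEnv) C:\WINDOWS\system32>echo %JAVA_HOME%
    \myEnvironmentName\Library
    (kpyEnv) C:\WINDOWS\system32>where java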
I am unsure why. I tried manually setting the variable inside the environment, but even though the value shown both in a standard command prompt and in conda-meta\state matches my Windows environment variable path, echo %JAVA_HOME% still resolves to the environment's Library folder. I also get a warning on conda activate about "overwriting environment variables set in the machine overwriting variable {'JAVA_HOME'}", and yet the value is still wrong.
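In case it helps, this is a sketch of how I tried pinning the variable per-environment (the JDK path below is a placeholder for my actual install location):

    (kpyEnv) C:\WINDOWS\system32>rem placeholder path below; my real JDK location goes here
    (kpyEnv) C:\WINDOWS\system32>conda env config vars set JAVA_HOME=C:\path\to\my\jdk
    (kpyEnv) C:\WINDOWS\system32>conda deactivate
    C:\WINDOWS\system32>conda activate kpyEnv
    (kpyEnv) C:\WINDOWS\system32>conda env config vars list

As I understand it, conda env config vars is what writes the value into conda-meta\state, so I expected that value to win on activation. I also tried listing any package activation scripts that might be resetting JAVA_HOME afterward, though I'm not sure I'm interpreting the results correctly:

    (kpyEnv) C:\WINDOWS\system32>dir "%CONDA_PREFIX%\etc\conda\activate.d"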
After hours of troubleshooting I created an account here (yes, after browsing existing topics first; I did find one possibly similar question, but it went unanswered).