On Mar 09, Hans Hagen wrote:
> On 3/9/2015 10:50 PM, Alan BRASLAU wrote:
> > This can be done once and is much better than getting ConTeXt to convert every time on the fly.
>
> Do you really think that we let context convert a big file each run in a critical workflow? Context will only resample when an image file has changed (which happens when you have to process from repositories updated by authors).
I guess that's a question for me? Of course I don't plan to run context on the big files every time -- as I know that's still way too slow ;) But it would be nice if I didn't need magic external tools to guess the correct physical print size, calculate the actual number of pixels for e.g. 300 dpi, etc.

I already have a script which extracts all JPGs from a context PDF using pdfimages, and then renames them to the "input" file names according to the log file. Right now I shrink the PDFs using 'gs -dPDFSETTINGS="/screen" ...', then extract a copy of the small images into some ...-screen/ directory and use them for all future context runs.

But using gs or ImageMagick's convert plus some magic log file reading/calculation is a bit ugly, while context knows all the details about physical image size, native pixel size of the input JPGs etc., so context could perfectly well do the job if asked to (and without huge intermediate PDFs)...

Finally, I like the idea of being able to create the final PDF in a "single" context run from the *original* source (== original JPGs), without lots of additional tools and steps/scripts (or, if I want to start over "from scratch", from the input files).

Just an idea... (more to come ;-)

Harald

--
"I hope to die before I *have* to use Microsoft Word."
   -- Donald E. Knuth, 02-Oct-2001 in Tuebingen.

Harald Koenig
koenig@tat.physik.uni-tuebingen.de
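A rough sketch of the pipeline described above, for reference. The file names (big.pdf, small.pdf), the img- output prefix, and the 4in print width at 300 dpi are illustrative assumptions; the log-file renaming step is not shown.

    # 1) extract the embedded JPEGs from the big PDF (poppler's pdfimages)
    pdfimages -j big.pdf img                # -> img-000.jpg, img-001.jpg, ...

    # 2) downsample one image for a 4in print width at 300 dpi:
    #    4 in x 300 dpi = 1200 px, so resize to 1200 px wide (ImageMagick)
    convert img-000.jpg -resize 1200x img-000-screen.jpg

    # 3) alternatively, let ghostscript shrink the whole PDF at once
    gs -sDEVICE=pdfwrite -dPDFSETTINGS=/screen -o small.pdf big.pdf

The arithmetic in step 2 is the part context could do internally: target pixels = print width in inches x target dpi, and context already knows both numbers.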