1 Replies - 1273 Views - Last Post: 10 September 2012 - 08:46 PM

#1 atraub

  • Pythoneer

Reputation: 759
  • Posts: 2,010
  • Joined: 23-December 08

File IO and huge strings

Posted 09 September 2012 - 09:47 PM

I've always been taught that when doing file IO, you want to keep the file open for as short a time as possible. When I was in school, I was taught a strategy we referred to as the "accumulator approach": build the entire output string, open the file for writing, write the string, and then immediately close the file. I've always believed this was advantageous for two reasons: IO is slow, so reducing the number of writes speeds things up, and keeping the file open for a shorter period of time minimizes the chance that it gets accessed by multiple programs simultaneously.
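For concreteness, here's roughly what I mean by that (just a sketch; the function name and the per-record formatting are placeholders, not my actual export code):

# Minimal sketch of the "accumulator approach": build the whole output
# in memory first, then open, write, and close in one short burst.
def export_report(records, path):
    # Accumulate every line into a single string before touching the file.
    output = "\n".join(str(record) for record in records)

    # The file is only open for the duration of one write.
    with open(path, "w") as out_file:
        out_file.write(output)

export_report(["row 1", "row 2", "row 3"], "report.txt")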

Over the weekend I applied this approach to some code that exports a database to a text file. In this situation, the output string could become enormous. Do you guys see any problems with this approach in such a situation? Are there benefits to having many small writes rather than one big one if the big one would end up being a string that might be thousands of lines long?
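For comparison, this is roughly the many-small-writes version I'm weighing it against: stream each row to the file as it comes back from the database instead of accumulating everything first (again just a sketch, assuming a sqlite3 database and a made-up table name):

import sqlite3

# Sketch of the incremental alternative: write each row as soon as it
# is fetched, so memory use stays flat no matter how big the table is.
def export_table(db_path, out_path):
    conn = sqlite3.connect(db_path)
    try:
        with open(out_path, "w") as out_file:
            # "inventory" is a placeholder table name.
            for row in conn.execute("SELECT * FROM inventory"):
                out_file.write("\t".join(str(col) for col in row) + "\n")
    finally:
        conn.close()

(The file object buffers these small writes internally anyway, so they don't each hit the disk on their own.)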

(submitted from my phone, be gentle)

This post has been edited by atraub: 10 September 2012 - 10:21 AM


Replies To: File IO and huge strings

#2 atraub

  • Pythoneer

Reputation: 759
  • Posts: 2,010
  • Joined: 23-December 08

Re: File IO and huge strings

Posted 10 September 2012 - 08:46 PM

Well, the more I think about it, the more comfortable I am that there are likely no real gotchas to building one enormous string.
