Recent Changes

As some of you may have already noticed, a number of changes were made to the Foolz Archive this week.

FoolFuuka 2.0.0

A few days ago, we updated the software used to run the Foolz Archive. We are now running the latest “stable” iteration of the software in our production environment. Previously, this version was only available in our beta testing environment and had been stuck in “development hell” for over half a year. We finally got around to completing it and merging it into our master branch. This update contains many goodies that boost performance and provide additional benefits. We are also using HHVM (Facebook’s HipHop Virtual Machine) to power this build, so everyone should finally see faster page generation and load times. I would like to list all of the new changes here, but that will be done in another blog post so that other users can read it before upgrading.

Anyway, we did try to test the software to the best of our abilities, but aside from a select few, our group of beta testers wasn’t very reliable. Bug reports were often hard to understand, or issues were never reported at all, which left us no option but to catch the remaining problems in our production environment, monitoring our error logs and hoping for the best. Therefore, we would appreciate it if you took the time to submit any bugs or issues you encounter on our /dev/ board or GitHub. This will help us iron out the remaining problems.

Foolz Archive Beta

With the recent release of FoolFuuka 2.0.0, we took the opportunity to make some changes to the beta program. We have removed the requirement of user accounts/registrations and replaced it with a key that will be made available through random sources. If you previously had beta access, you will now need to find the key to get in. We don’t really make it that hard to find.

Sphinx Search

We made a few changes to the configuration file for our Sphinx Search daemon. I will address each of them below and explain why it was made.

min_word_len = 2
Previously, this was set to 1.

This is the one change many people will notice when searching for single letters, with or without a wildcard attached. It was a trade-off necessary to keep the server responsive. Previously, the search daemon and the indexer both generated heavy load and high disk I/O, so high that our search server would become unresponsive for long stretches until we were able to kill the processes. By ignoring words shorter than two characters, this change reduced the CPU load required to expand wildcard terms and cut the size of our search indexes by 50%.
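For reference, this is roughly where the setting lives in sphinx.conf; the index name, source, and path below are placeholders rather than our actual configuration:

index board_posts
{
	source        = board_posts
	path          = /var/lib/sphinxsearch/data/board_posts
	min_word_len  = 2    # words shorter than two characters are no longer indexed
}

Single-character terms simply never make it into the index, which is why a search for a lone letter now returns nothing.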

max_children = 15
Previously, this was set to 0.

This controls the total number of children the search daemon will fork. Since it was previously set to 0, the daemon would spawn as many children as needed to fulfill every concurrent search requested. This often led to a large number of children being spawned, which generated heavy CPU load and disk I/O and would leave the server unresponsive whenever the indexer was running in the background at the same time.

We set the maximum number of children allowed (concurrent searches) to 15 to help resolve the issue described above. This ensures that the server hosting the search daemon remains responsive and accessible at all times. Currently, FoolFuuka will spit out an incorrect error stating that the search daemon is offline, instead of explaining that the daemon is busy processing ongoing requests and that you should try again later. We will fix that once we track down the specific error code behind that particular “error”. In the meantime, your request should go through after a simple refresh.
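For completeness, here is a sketch of where that cap sits in the searchd section of sphinx.conf; the listen port, log paths, and pid file are illustrative values, not our real setup:

searchd
{
	listen        = 9312
	log           = /var/log/sphinxsearch/searchd.log
	query_log     = /var/log/sphinxsearch/query.log
	pid_file      = /var/run/sphinxsearch/searchd.pid
	max_children  = 15    # cap on concurrent search children; once it is hit, extra queries are rejected with a "maxed out" error instead of being queued
}

That “maxed out” rejection is presumably the response FoolFuuka is currently misreporting as the daemon being offline.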

The /v/ Archive

Yes, we made a blog post about this a few days ago, but it appears that some people still didn’t read it or chose to ignore the facts stated in that post. Anyway, we (pretty much me alone) have decided what to do with the /v/ archive, and I will address that in another post in the next few days.