Come one, come all!
SWapi is ready for beta and I need testers. Email me if you have an Android device to become a beta tester, and you too can experience the geeky thrill of navigating data from the Star Wars films (episodes 1-6), collected by the gracious @swapico.
Caching speeds things up by a factor of about two. These are just two pulls, so the speedup could be off by a smidge.
Faster is of course better, so I'm sure there are a few tricks to speed this up more; the goal is for Internet speed to be the only thing slowing things down.
So slow without caching. 6 MINUTES! And that's just to gather and parse the film data, where there are only six entries. The speedup will come entirely from not using the data connection over and over and over and over to normalize a URL to a name.
There's more room for improvement in my caching, I'm sure, since I still search through the entire cache on every lookup. Perhaps some hashing is in order.
Note to self: invest in a cache manager of some kind because sweet Lucy Flawless you are doing way too many URL requests now.
Probably a new class to handle cache activities of all kinds. Reusable code is king!
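As a minimal sketch, that reusable cache class could key entries by URL in a `HashMap` (so lookups don't scan the whole cache) and carry a configurable expiry. All the names here (`UrlCache`, `Entry`) are hypothetical, not the app's actual code:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of a reusable URL-response cache with expiry.
class UrlCache {
    private static class Entry {
        final String json;
        final long fetchedAt;
        Entry(String json, long fetchedAt) {
            this.json = json;
            this.fetchedAt = fetchedAt;
        }
    }

    private final Map<String, Entry> entries = new HashMap<>();
    private final long maxAgeMs;

    public UrlCache(long maxAgeMs) {
        this.maxAgeMs = maxAgeMs;
    }

    // Store a response keyed by its URL; HashMap lookup is O(1),
    // so no more searching through the entire cache per request.
    public void put(String url, String json) {
        entries.put(url, new Entry(json, System.currentTimeMillis()));
    }

    // Returns the cached body, or null if missing or older than maxAgeMs.
    public String get(String url) {
        Entry e = entries.get(url);
        if (e == null) return null;
        if (System.currentTimeMillis() - e.fetchedAt > maxAgeMs) {
            entries.remove(url); // stale: evict so the caller re-fetches
            return null;
        }
        return e.json;
    }
}
```

Anything that talks to the API would check `get` first and only hit the network on a null, which is exactly the "stop doing way too many URL requests" fix.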
Much useful. Very confuse.
Now that data is being pulled, it should be normalized. What does that mean? Strings shouldn't look like they were formatted for a computer, and instead of those pesky arrays full of links there should be lists of whatever the links point to.
The names thing is pretty straightforward. I was going to have a unique if-statement for each kind of key so they'd look better, but sweet peppered okra, that's way too many if-statements. The keys aren't particularly cryptic, so instead I'll do some simple String conversions.
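That String conversion could be as simple as swapping underscores for spaces and capitalizing each word, turning an API key like `birth_year` into `Birth Year`. A sketch (`prettifyKey` is a hypothetical helper name, not the app's actual code):

```java
// Hypothetical sketch: make raw API keys human-readable without
// one if-statement per key.
class KeyFormatter {
    public static String prettifyKey(String key) {
        StringBuilder out = new StringBuilder(key.length());
        boolean capitalize = true; // capitalize the first letter of each word
        for (char c : key.toCharArray()) {
            if (c == '_') {
                out.append(' ');   // "birth_year" -> "birth year"
                capitalize = true; // next letter starts a new word
            } else if (capitalize) {
                out.append(Character.toUpperCase(c));
                capitalize = false;
            } else {
                out.append(c);
            }
        }
        return out.toString();
    }
}
```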
The arrays of links are another thing entirely. All web requests are already done asynchronously so as not to tie up the UI thread, so doing a few more won't be too bad; another method called by the async thread isn't a problem. Pulling all that data is going to get annoying, but I'm trying to stay away from using a client-side database; cached files are as far as I'm willing to go (there's no need for me and the API guy to hold the same info, that's silly).
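The link-to-name resolution could be sketched like this, checking the cache before touching the network so each URL costs at most one request; `NameSource` is a hypothetical stand-in for the real HTTP call, and on Android the whole loop would run on the background thread:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Hypothetical sketch: turn an array of entity URLs into a list of
// display names, fetching each URL over the network at most once.
class LinkResolver {
    public interface NameSource {
        // Stand-in for the real network call: pull the entity at this
        // URL and read out its "name" (or "title") field.
        String fetchName(String url);
    }

    public static List<String> resolve(List<String> urls,
                                       Map<String, String> cache,
                                       NameSource source) {
        List<String> names = new ArrayList<>();
        for (String url : urls) {
            String name = cache.get(url);
            if (name == null) {           // cache miss: one network hit...
                name = source.fetchName(url);
                cache.put(url, name);     // ...then remember the answer
            }
            names.add(name);
        }
        return names;
    }
}
```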
Fortunately the caching helps: since normalization happens during the data pull, entries are stored normalized and can be retrieved normalized. The regular cache update rules (currently five minutes, which does seem a tad aggressive) can still be followed.
The first release of SWapi, still in alpha but moving right along. If you’re part of the Blackout Productions Games release group you’ll be able to find the app in the market here.
If you’d like to be part of the testing release group email the team for an invite.
Dumped the idea of pulling individual IDs and am instead pulling by "pages," which the API supports natively. This avoids the problem of trying to pull IDs that don't exist, and sidesteps the odd numbering (the IDs aren't all sequential).
Also implemented a cache system so the app doesn't have to poll the API constantly. With the paging this was a necessity: it's dumb to burn data fetching data you just got, and each page contains eight or so entries.
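The page-following loop can be sketched like so, assuming the swapi.co paging scheme where each page carries its results plus a `next` URL that's null on the last page; `Fetcher` is a hypothetical stand-in for the real HTTP-plus-cache layer:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of pulling by pages instead of individual IDs.
class Pager {
    public interface Fetcher {
        Page fetch(String url); // stand-in for the HTTP + cache layer
    }

    public static class Page {
        public final List<String> results;
        public final String next; // null when there are no more pages
        public Page(List<String> results, String next) {
            this.results = results;
            this.next = next;
        }
    }

    // Follow "next" links until the API says there are no more pages,
    // instead of guessing at IDs that may not exist.
    public static List<String> fetchAll(Fetcher fetcher, String firstPageUrl) {
        List<String> all = new ArrayList<>();
        String url = firstPageUrl;
        while (url != null) {
            Page p = fetcher.fetch(url);
            all.addAll(p.results);
            url = p.next;
        }
        return all;
    }
}
```

The nice property is that nonexistent IDs simply never come up: the API itself says when to stop.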
I'm working to tweak the caching system a bit so it'll handle clearing itself out, then I'll move on to using the expandable lists I wanted to use initially. Later on, maybe a searching algorithm or something, because so far the entries aren't sorted at all. Look for an alpha release later tonight!
Since you’ve been good, have a screenshot!
Note to self: make sure ID requests are valid before assuming there’ll be data there. Don’t know quite how to do that yet but storing some data may be necessary at some point even though I’m trying to avoid it.
In “Duh” News, you really shouldn’t try to load everything from an API; there’s just too much.
Going to try loading a chunk at a time by detecting when the user scrolls near the end of what's currently loaded. I've seen the pattern before and it makes sense.
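The check itself is just threshold math; on Android it would sit inside an `AbsListView.OnScrollListener`'s `onScroll` callback, but the logic can be sketched on its own (names here are illustrative, not from the app):

```java
// Hypothetical sketch of the "load more on scroll" condition.
class ScrollCheck {
    // firstVisible + visibleCount is the last row on screen; when it gets
    // within `threshold` rows of totalCount, fetch the next chunk.
    public static boolean shouldLoadMore(int firstVisible, int visibleCount,
                                         int totalCount, int threshold) {
        return totalCount > 0
                && firstVisible + visibleCount >= totalCount - threshold;
    }
}
```

When it returns true (and a request isn't already in flight), kick off an async pull of the next page and append the results to the adapter.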
The expandable list items are going to be tricky too, lots of custom array adapters to make.