Core dev meeting #64
In a terrible twist of fate I missed this core dev meeting.
If you're interested, on my end:
- Rosetta API is 99% done. Fun news: they have changed it, it's now called the Mesh API, and there are some changes here and there that I'm applying
- Overall I'm in the phase where I talk with Coinbase support to ask specific questions, because the API is very broad, so it's sometimes unclear how it applies to us (e.g. there's a whole step used to estimate fees, which is useless for us)
- Everything has been written with Wax, it works great and I needed zero support, hats off!
- I expect to be done this month
@Blocktrades
Okay, so I guess I'll get started with the stuff we've been working on. First let's talk about the release: I'll go over what we've done, then I'll talk about the release schedule. We're making the final changes to hived right now. There are some changes to the authority stuff, so we'll have a new hived release out, probably this week. Beyond that, we're tentatively setting the hard fork for February 8th, but that's probably not realistic for the real hard fork date; it'll probably be later, because there are a few more changes I'd like to get in, and I don't think we'll be able to finish those by February 8th. We'll see. Everything else is looking pretty good. The last thing we're doing on HAF right now is improving the way you upgrade HAF. If we've made major changes to HAF, doing an upgrade could be problematic, so we've changed our upgrade process to be better at detecting whether anything got broken during an upgrade. The other thing we're looking at, well, some of it is too technical to get into here. We've got one other change we want to make to HAF, which is to allow other apps to register creation of indexes on the main HAF tables. We do that in a couple of HAF apps, and I'm sure other apps will find it useful too. You could do it before, but it wasn't great the way it was done; there was no central management of the process of adding indexes to the base HAF tables. On Hivemind, a lot of work has been done recently, and I've been doing a lot of it personally. We've optimized a lot of the queries on the server side, and we've cut the loading by more than half of what it was before.
So the loading has really dropped a lot, and this is in production tests. I've set up a node at api.syncad.com; you're probably familiar with it because I've mentioned it a bunch of times before, and it's been a great place to test the changes to the APIs. It's also where we've been testing the Hivemind release candidate in production: we've been sending production traffic from api.hive.blog to it so we can be sure it's handling the load well. Initially the load was pretty high, but it's quite low now. The biggest change that impacts apps is that we've limited the number of full posts that can be fetched at once to 20. Previously we saw some people fetching like 100 posts at once, and that was generating multiple megabytes of data being sent back. That's not something we want anyway, because if you have API calls where a very small request can get a megabyte-size response, that leaves you open to amplification attacks: someone using a small amount of their own bandwidth to generate a large amount of bandwidth on your side. So it's generally not a good idea for us to have APIs doing that. But that's all taken care of. What I'd like to suggest is that everybody who's got an app set up a copy, point their API node at api.syncad.com, and start testing there, looking for any problems. I don't expect anything except that one issue I mentioned, the 20-post limit, but nonetheless it's well worth checking your app against api.syncad.com. On the HAF API node, which is basically the set of scripts we use to set up an API server, we made a bunch of small changes. One of the most notable, again dealing with loading, is that we've improved the caching in Drone a little bit.
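The 20-post cap described above is, at its core, server-side request clamping. Here is a minimal sketch of the idea (the function and constant names are hypothetical, not the actual Hivemind code):

```python
# Sketch of clamping a client-supplied result count on the server, so a tiny
# request can't trigger a megabyte-scale response (amplification protection).
MAX_POSTS_PER_REQUEST = 20  # the limit mentioned in the meeting

def clamp_post_limit(requested: int) -> int:
    """Clamp a client-supplied post count to the server-side maximum."""
    if requested < 1:
        raise ValueError("limit must be a positive integer")
    return min(requested, MAX_POSTS_PER_REQUEST)
```

With something like this in place, a request for 100 full posts is served 20 instead of generating a multi-megabyte payload.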
This has just been customization where we found we could cache some specific API calls better, especially some of the calls that were kind of costly. I have some ideas for further improvements to how the caching works, but that'll be in the next release, not this one. On HAF Block Explorer, mcfarhat, this would be interesting to you: we've been doing some work, which should also be finished this week I think, to speed up block searching. Block searching was a halfway-optional thing in an installation because it was kind of slow and required a bunch of extra indexes, but with Postgres 17 we were able to use fewer indexes, so that should be a lot better now. That'll be a nice new feature for HAF Block Explorer. On Balance Tracker, I made some optimizations to speed up some of the queries used by HAF Block Explorer, so that also lowers HAF Block Explorer's load on the server; its load while it's syncing is much, much less. That'll also be nice. Wax is getting close to done, I guess. From a scheduling perspective, we plan on releasing all this stuff, and we're shooting for the 18th or 19th of this month. That's why I really want to incentivize you guys to start testing on api.syncad.com if you have an app, because it'd be great to know about any problems before we release rather than after. And for anybody running an API node: I know this is scheduled kind of close to Christmas, but it is a little bit before Christmas, so schedule some time to be ready to upgrade your API node. On the Wax side, things that are remaining: we're adding some ability for updating account authorities. Hold on just a second. Let's see.
So Wax will be finished for TypeScript; we're still doing work on Python, so I don't know if we'll have the Python version done in time or not. We're still going to see if we can make that version ready for public release. Beekeeper's done, and we're doing a little work on the Python side with the package that interfaces with Beekeeper, but that'll be done quite soon. We've also fixed some problems in the CI tests that caused random failures. That's mainly useful for developers, but it's nice to have out of the way because it was annoying. I think we'll probably also be releasing Denser. I'm not 100% certain on that, but I'm reasonably certain we'll release a version of Denser too. We'll probably run it side by side with hive.blog for a bit before we switch over entirely, though. One other thing: we've been working on a health checker user interface. It's a standalone TypeScript component that can be included in any TypeScript app. Basically, I'd like to get the apps standardized on using it, if we can, and of course we'll take feedback on how to improve it too. I'm going to review the latest version of it after the meeting, and once I think it's comfortable in every way, I'll post the link in the Core Dev channel so people can take a look at it and see if they'd like to integrate it into their app or not. So that's kind of a high-level overview. The big thing I want to announce is the scheduling, so if there are any questions about the scheduling for the releases: we're doing it in two phases. The first phase will be the release of the HAF API node stuff, and there will be a new version of hived, but there will probably be a later version of hived sometime in the first quarter with yet further changes.
But there is a hived update as part of the HAF API node release, and again, we want to do that on the 18th or 19th. Okay, any questions about any of that? Concerns?
@Arcane
Yeah, so most API nodes are currently running version 1.27.5. Is there any benefit to running a newer version right now, or should we wait for the coming release?
@Blocktrades
Well, the coming release is literally a matter of days away, so I would say wait a couple of days.
@Arcane
Okay. Any major change in disk space consumption between the new version and the one you are working on?
@Blocktrades
Not really, but there is one issue: if you start with a monolithic, full-size block log, the new version automatically splits it up into individual one-million-block pieces. You can turn off that behavior, first of all; there's an option to disable it so you can still work with one full-size block log. But if you want to split up your existing monolithic block log, in other words, if you're not going to do a sync but are going to take your existing block log and split it, you're going to need roughly 2x the space, because you'll have the monolithic block log while it's being split into pieces. So you'll temporarily need another 500 gigs during that process. There are a couple of ways to handle this. If you have two file systems, you could put the monolithic block log on a slower disk and have it split onto your faster disk, for instance. Or if you really just don't have the space, you could do a sync instead of having it split your existing block log.
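For illustration only, disabling the automatic split might look like the snippet below in the node's config. The option name `block-log-split` and its sentinel value are assumptions based on the discussion, not confirmed syntax; check the actual release notes for the real setting.

```ini
; ASSUMED option name and value -- verify against the hived release notes.
; Keep one monolithic block log instead of auto-splitting it into pieces:
block-log-split = -1
```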
@Arcane
So I guess upgrading will require a replay.
@Blocktrades
Yes, yeah, we're definitely planning for this to be a replay: a replay of hived and a replay of HAF.
@Arcane
Okay, has there been any improvement in replay time?
@Blocktrades
Yeah, everything has been improved to different degrees. It especially helps if you have better hardware.
@Arcane
That's the ultimate replay benefit. We just finished ours, so I will need to do another one; that's why I'd like to know.
@Blocktrades
There certainly are some speed-ups. But there's also more stuff now, right? HAF Block Explorer is now a standard part of the API stack, which means Reputation Tracker and Balance Tracker are also new parts of the stack that didn't exist before. Hivemind sync time is still the worst of it, and I don't think that's been sped up much; we didn't really focus on speeding it up. The other apps are faster, but your total time is still going to be dominated by Hivemind sync time. On our fastest machine, Hivemind itself now syncs in about two and a half days, but for you guys it's going to be longer; I would expect three to four days for a Hivemind sync.
@Arcane
It's around three days, yeah. So we should be the same for that part.
@mcfarhat
Yeah, I'm still syncing Hivemind. And our machine is, I mean, nothing short of powerful, but yeah, it takes time. It's been two days now.
@Blocktrades
I could do it in two and a half days.
@mcfarhat
Yeah. Yeah, I had a question actually, Dan. Before doing all this replay stuff, I tried to grab a snapshot. I think you guys are hosting these on snapshot.hive.blog for the full HAF setup, right, instead of doing the replay. I couldn't manage to do the whole thing properly. There were a couple of files: one with the blockchain, one full without the blockchain, and then the incremental stuff. The instructions on the HAF API node repo aren't really clear on how to do a proper restoration, so I couldn't figure it out. I don't know, yeah.
@Blocktrades
Okay, I'll check that. Maybe offline you can send me any questions you had about it, because I haven't read that in so long that I don't remember what it says. If you could point out any particular issues you had, send them to me offline and I'll look over what's there.
@mcfarhat
Yeah, it's just, yeah, you know.
@Blocktrades
Yeah, I mean, that's a great point. I also need to update that stuff for the new version, so it'd be a good time to improve the documentation there as well.
@mcfarhat
Yeah, absolutely, I agree. It's definitely not clear: the old instructions have been there for several months, but now that the snapshots are available, it's not very clear whether to run the main file and then the incrementals, and what to do exactly. Even the URL, I mean, it says whatever.net, you know.
@Blocktrades
Okay, like I said, I haven't even read that since it was written ages ago, so I don't even know what it says.
@mcfarhat
Yeah, yeah, yeah.
@Blocktrades
But okay, that's good, because you brought up an action item I'd kind of forgotten about, which is that we need to update the snapshots.
@Gtg
So the instructions haven't changed, but the initial snapshot is for the old version, and the incrementals are targeted at the original one.
@Blocktrades
Yeah, we've got to update that for the new version.
@Gtg
Yeah, yeah. If you are familiar with ZFS snapshots, it should be clear, but it's not really clear for just copy-pasting, because nowadays there are a lot of those incrementals. And you can actually take shortcuts: having frequent snapshots takes a lot of space, so you can just skip the intermediate ones if you only need to get to the most recent.
@mcfarhat
Yeah, to be honest, I'm not very familiar with how to restore ZFS snapshots. ZFS is something I only recently started exploring once we set up our node. I tried restoring the full version without the blockchain and it wouldn't work; then I tried the blockchain only, and it wouldn't work; I tried the incremental, and it didn't work. I understand incrementals need a base to start with, but nothing worked. So I went for a replay; I don't know why it failed.
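For what it's worth, restoring a full ZFS snapshot plus incrementals generally follows the receive-base-then-increments pattern sketched below. This is a rough illustration, not the documented procedure for these particular snapshots: the dataset and file names are made up, and the actual layout on the server may differ.

```shell
# Rough sketch only; dataset and file names are hypothetical.
# Incrementals must be applied, in order, on top of the exact base
# snapshot they were generated against.
zcat haf-full@base.zfs.gz | sudo zfs receive tank/haf            # full (base) snapshot
zcat haf-incr@base-to-week1.zfs.gz | sudo zfs receive tank/haf   # first incremental
zcat haf-incr@week1-to-week2.zfs.gz | sudo zfs receive tank/haf  # next incremental
```

This matches Gtg's point above: if the increments were sent spanning multiple snapshots, the intermediate ones can be skipped.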
@Blocktrades
You should have been able to do it. If you just did a full snapshot, that should have worked, so I'm not sure what happened there.
@mcfarhat
Yeah. You know, I had an issue with the environment file, so maybe that was it.
@Blocktrades
Oh, yes, that would have done you in for sure. With the wrong environment file, you weren't going to get very far. So I think this will be much better. Okay, so that in fact is almost certainly it.
@Gtg
Yeah, also if you're using like recent version.
@Blocktrades
Yeah, that's what happened. He was using a recent environment file with the old stuff, and they're just totally incompatible.
@Gtg
Also, I don't want to spoil the fun, but the snapshots are probably most useful for those who actually have fast network infrastructure. It will probably be faster to just replay your node instead of getting snapshots, unless your network is really fast and syncing is really slow.
@Blocktrades
It sounds like we should work with him, because it sounds like he's got a fast network connection. So he can be a good test for that.
@Gtg
Oh, okay. So someone's happy to try it out.
@Blocktrades
When the new snapshots and state files are ready, you'll be the first person we contact.
@Gtg
One second. Okay, but excellent.
@Blocktrades
Well, you're probably doing the work setting it up. So, I don't think you count.
@mcfarhat
Yeah. Yeah, I'm just bragging that my network is faster, so don't worry. I don't know, I'm running on Hetzner; it's quite a bulky machine. I'm not trying to be competitive.
@Blocktrades
Yeah, no. So about the snapshots: I stopped providing snapshots for the old-style Hivemind and account history nodes because it stopped making sense. With those big files, you download them and then uncompress them, and if you're not paying attention, replaying is actually faster: you can have an old-style account history node replayed faster than you can download the snapshot and load it after uncompressing. So it's mostly useful if you actually know it will be faster for you. Some witnesses were trying to use my snapshots for witness purposes, and if they're downloading at like 10 megabytes per second, it's not actually making it faster for them. It's better to just sync from scratch, because at that speed you'll have the same result just syncing.
@mcfarhat
All right, fair enough. Do you guys want me to give some updates on our app? I'm happy to.
@Blocktrades
Yes, absolutely. I think we got all the questions out of the way for the day because it looks like there's not a whole lot of people to ask questions. So, yeah, go ahead.
@mcfarhat
All right, so yeah, I'll share some updates regarding Block Explorer. We've done several upgrades to different components used in Block Explorer. We introduced a new search component that lets us better utilize the autocomplete feature that was already available in our search bar, where you have auto-suggest for account names. Now it's available in all the search sections (block search, account search), and it's a much more robust component that is better configurable and more responsive to keyboard actions and whatnot. We also switched the calendar component we're using to one with more functionality that allows more meticulous configuration; it prevents people from putting a time in the future as a starting point and a time in the past for the target date, which was causing improper responses and incorrect data, and there's more validation on the backend too. There were some pagination issues all across the Block Explorer, so we also worked on fixing those, improving the overall performance of the pagination and avoiding errors and whatnot. We're currently doing some refactoring of the search section, as suggested by one of our team members; he said it's too bulky, let's try to break it down and improve the whole search code, so he's working on that. We also added more witness details to the user page. Previously, if a person was a witness, the page just said so; you could not tell if he was deactivated, and you could not see his witness rank or the link to his witness announcement post. All of these are now available on the main user page when you go to a person's profile or a witness profile. And we revamped our settings menu, the side menu on the mobile view; it's more user-friendly now.
@Blocktrades
How does it feel on mobile? Is the app okay for mobile in general?
@mcfarhat
Yeah, I would say it's much nicer now. We've done a lot of enhancements and a lot of fixes on the mobile side of things, so yeah, I'm really happy with the progress. What else? We recently created, kind of like Hive Blocks used to have, a single-post page where you can see the JSON data of a single post, with the whole data and properties as they are by default. But I did not merge it yet, because the dev used the old condenser code to fetch the query. I wasn't eager to use that, but we couldn't find any other way at the moment to fetch the data. I don't know if you're aware of a way to get the post content and post data other than using the condenser API. Is there some REST API, or some way via Wax, to grab the data instead of the condenser API?
@Blocktrades
That's essentially going to be a Hivemind call.
@mcfarhat
Okay.
@Blocktrades
And usually if there's a condenser version of a call, there's a bridge version of it, which is usually the same thing, except better implemented. So there's probably a bridge call. If you just send me the specifics of the call, I can check it, because Hivemind is what I'm actually working on right now, so I'm more familiar with those API calls than I normally am. I can look it up pretty quick.
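For reference, the bridge call in question is likely `bridge.get_post`. Below is a minimal sketch of building the JSON-RPC payload for it; the method and parameter names are based on the public Hivemind bridge API, so verify them against your node before relying on this:

```python
import json

def build_get_post_request(author: str, permlink: str) -> bytes:
    """Build a JSON-RPC request body for Hivemind's bridge.get_post call."""
    payload = {
        "jsonrpc": "2.0",
        "method": "bridge.get_post",
        "params": {"author": author, "permlink": permlink},
        "id": 1,
    }
    return json.dumps(payload).encode()

# POST this body to your API node's JSON-RPC endpoint, e.g. https://api.hive.blog
```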
@mcfarhat
Okay, excellent. I'll send you the API calls we're currently using, and we'll see if we can use an alternative, because I'm also trying to move away from the condenser API.
@Blocktrades
Okay, so another test I've kind of got for you guys: the new health checker. You've got the old health checker integrated already, but I want you to update to the new health checker. Like I said, I'm going to check it after this meeting today, and if it looks good, I'll contact you and create an issue for it for you.
@mcfarhat
Okay, okay, excellent, excellent.
@Blocktrades
I didn't mean to interrupt you too. I don't know if you had anything else.
@mcfarhat
Oh, no, no problem. We've also made some fixes to the witness schedule page: there were some blocks being missed where witnesses were not being properly tracked for producing blocks, and this was also fixed. We have better handling for no-result searches, we've done some GUI improvements, and we introduced a couple of Font Awesome icons on the witness page. We're also starting to work on more integration of Balance Tracker data into Block Explorer; we're trying to bring some cool functionality to Block Explorer around seeing the history of a user's balance. So we're starting to work on a single-page display of the history, and we're also looking to introduce some sort of chart on the user's page that shows the history of his balance and how his account is growing. So yeah, that's basically it for the last month.
@Blocktrades
Okay.
@mcfarhat
If you have any questions, I'm happy to answer.
@Blocktrades
No, I don't. The one thing I've got to look up on the history is how it's implemented right now; there's probably some improvement we might want to do on the backend related to the history as well. I can't remember how the history works right now.
@mcfarhat
Okay.
@Blocktrades
So, one of the things we found handy in another application, where we had a similar history feature: internally it's got a bunch of data points for all the changes, but when you actually go to graph something like that, the number of points you want is basically relative to how far you're zoomed out, because you basically don't want more points than you have pixels. Otherwise you're just getting a larger-than-needed data response. I don't know if the current API does that kind of filtering, but if not, we'd probably want to have somebody assigned, maybe Xander eventually, to do that filtering on the backend side so that it cuts the data down to the number of points you need.
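The backend filtering described here, returning no more points than the viewer has pixels, is essentially downsampling. A minimal sketch of one simple approach (uniform index thinning; real implementations often use min/max or LTTB bucketing instead, to preserve spikes):

```python
def downsample(points, max_points):
    """Thin a time-ordered series so at most max_points values remain,
    always keeping the first and last points: roughly 'no more points
    than you have pixels'."""
    if max_points < 2:
        raise ValueError("need at least 2 output points")
    n = len(points)
    if n <= max_points:
        return list(points)  # already small enough, return unchanged
    # Pick max_points indices evenly spaced across [0, n-1].
    step = (n - 1) / (max_points - 1)
    return [points[round(i * step)] for i in range(max_points)]
```

A charting frontend would pass its pixel width as `max_points` and plot the result directly, instead of shipping millions of raw balance changes over the wire.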
@mcfarhat
I see, I see, I see. Okay, that makes sense.
@Blocktrades
That's really important. You can imagine it easily, right? If you're zoomed out, you don't want 10 million points just because you're looking at a long period of time.
@mcfarhat
Exactly, exactly. Yeah, yeah.
@Blocktrades
So that might be a blocker for this issue too, for you right now, for that particular one.
@mcfarhat
I'm having one of the devs look into how we can properly implement this, so they'll probably run into the issue you mentioned. They're just trying to capture the whole idea of what we can do.
@Blocktrades
Yeah, and the UI and everything can be specced out there too, right? You can have almost everything specced out except the API call at the end. So it's not a big deal; it's not a complete blocker. I just mean that for the final implementation you'd most likely need the backend code changed, I think. Okay, anything else? I guess that's probably it.
@mcfarhat
No, not from my end. Thank you, thank you.
@Blocktrades
Arcane, did you have anything to report?
@Arcane
No, nothing else.
@Blocktrades
Okay, I guess that's it for today then. Thanks, guys.
@mcfarhat
Awesome, thank you guys. Have a good day.
@Arcane
Yeah, goodbye. Bye-bye. See you.